Jan 16 23:59:03.280399 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 16 23:59:03.280447 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 16 22:28:08 -00 2026
Jan 16 23:59:03.280474 kernel: KASLR disabled due to lack of seed
Jan 16 23:59:03.280491 kernel: efi: EFI v2.7 by EDK II
Jan 16 23:59:03.280507 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 16 23:59:03.280523 kernel: ACPI: Early table checksum verification disabled
Jan 16 23:59:03.280541 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 16 23:59:03.280557 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 16 23:59:03.280574 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 16 23:59:03.280589 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 16 23:59:03.280611 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 16 23:59:03.280626 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 16 23:59:03.280642 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 16 23:59:03.280659 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 16 23:59:03.280678 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 16 23:59:03.280702 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 16 23:59:03.280720 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 16 23:59:03.280737 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 16 23:59:03.280753 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 16 23:59:03.280770 kernel: printk: bootconsole [uart0] enabled
Jan 16 23:59:03.280786 kernel: NUMA: Failed to initialise from firmware
Jan 16 23:59:03.280803 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:59:03.282031 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 16 23:59:03.282060 kernel: Zone ranges:
Jan 16 23:59:03.282077 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 23:59:03.282094 kernel: DMA32 empty
Jan 16 23:59:03.282122 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 16 23:59:03.282138 kernel: Movable zone start for each node
Jan 16 23:59:03.282155 kernel: Early memory node ranges
Jan 16 23:59:03.282171 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 16 23:59:03.282188 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 16 23:59:03.282204 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 16 23:59:03.282221 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 16 23:59:03.282237 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 16 23:59:03.282254 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 16 23:59:03.282270 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 16 23:59:03.282287 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 16 23:59:03.282303 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 16 23:59:03.282324 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 16 23:59:03.282341 kernel: psci: probing for conduit method from ACPI.
Jan 16 23:59:03.282365 kernel: psci: PSCIv1.0 detected in firmware.
Jan 16 23:59:03.282384 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 23:59:03.282401 kernel: psci: Trusted OS migration not required
Jan 16 23:59:03.282423 kernel: psci: SMC Calling Convention v1.1
Jan 16 23:59:03.282441 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 16 23:59:03.282458 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 16 23:59:03.282476 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 16 23:59:03.282494 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 23:59:03.282511 kernel: Detected PIPT I-cache on CPU0
Jan 16 23:59:03.282529 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 23:59:03.282546 kernel: CPU features: detected: Spectre-v2
Jan 16 23:59:03.282563 kernel: CPU features: detected: Spectre-v3a
Jan 16 23:59:03.282580 kernel: CPU features: detected: Spectre-BHB
Jan 16 23:59:03.282597 kernel: CPU features: detected: ARM erratum 1742098
Jan 16 23:59:03.282619 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 16 23:59:03.282636 kernel: alternatives: applying boot alternatives
Jan 16 23:59:03.282656 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:59:03.282674 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 23:59:03.282692 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 23:59:03.282709 kernel: Fallback order for Node 0: 0
Jan 16 23:59:03.282726 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 16 23:59:03.282743 kernel: Policy zone: Normal
Jan 16 23:59:03.282760 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 23:59:03.282777 kernel: software IO TLB: area num 2.
Jan 16 23:59:03.282795 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 16 23:59:03.282840 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 16 23:59:03.282861 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 23:59:03.282879 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 23:59:03.282898 kernel: rcu: RCU event tracing is enabled.
Jan 16 23:59:03.282916 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 23:59:03.282934 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 23:59:03.282952 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 23:59:03.282970 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
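The Memory: line in the kernel output above is internally consistent: available plus reserved equals the total, and the 39424K init figure reappears near the end of the kernel log as "Freeing unused kernel memory: 39424K". A quick check of the arithmetic:

```python
# Numbers copied from "Memory: 3820096K/4030464K available (... 210368K reserved ...)".
total_k, available_k, reserved_k = 4030464, 3820096, 210368
assert total_k - available_k == reserved_k   # the reserved figure accounts for the whole gap
print(f"{total_k / 2**20:.2f} GiB")          # ~3.84 GiB visible to the kernel on this a1.large
```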
Jan 16 23:59:03.282988 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 23:59:03.283006 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 23:59:03.283024 kernel: GICv3: 96 SPIs implemented
Jan 16 23:59:03.283049 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 23:59:03.283067 kernel: Root IRQ handler: gic_handle_irq
Jan 16 23:59:03.283085 kernel: GICv3: GICv3 features: 16 PPIs
Jan 16 23:59:03.283102 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 16 23:59:03.283119 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 16 23:59:03.283137 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 23:59:03.283154 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 16 23:59:03.283172 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 16 23:59:03.283190 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 16 23:59:03.283208 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 16 23:59:03.283225 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 23:59:03.283242 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 16 23:59:03.283265 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 16 23:59:03.283283 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 16 23:59:03.283301 kernel: Console: colour dummy device 80x25
Jan 16 23:59:03.283319 kernel: printk: console [tty1] enabled
Jan 16 23:59:03.283337 kernel: ACPI: Core revision 20230628
Jan 16 23:59:03.283355 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 16 23:59:03.283373 kernel: pid_max: default: 32768 minimum: 301
Jan 16 23:59:03.283392 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 16 23:59:03.283409 kernel: landlock: Up and running.
Jan 16 23:59:03.283432 kernel: SELinux: Initializing.
Jan 16 23:59:03.283450 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:59:03.283468 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 23:59:03.283486 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:59:03.283504 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 23:59:03.283522 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 23:59:03.283540 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 23:59:03.283558 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 16 23:59:03.283575 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 16 23:59:03.283597 kernel: Remapping and enabling EFI services.
Jan 16 23:59:03.283615 kernel: smp: Bringing up secondary CPUs ...
Jan 16 23:59:03.283633 kernel: Detected PIPT I-cache on CPU1
Jan 16 23:59:03.283650 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 16 23:59:03.283687 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 16 23:59:03.283706 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 16 23:59:03.283724 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 23:59:03.283742 kernel: SMP: Total of 2 processors activated.
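Three numbers in the timer lines above are tied together: the 83.33 MHz arch timer gives the 12 ns sched_clock resolution, and because delay-loop calibration is skipped, the BogoMIPS value is derived from the same frequency. A small sketch of that arithmetic; the HZ=1000 tick rate is an assumption (not printed in the log) implied by lpj=83333:

```python
# Reproduce "resolution 12ns", "(lpj=83333)" and "166.66 BogoMIPS" from the timer frequency.
freq_hz = 83_333_333        # "arch_timer: cp15 timer(s) running at 83.33MHz"
HZ = 1000                   # assumed kernel tick rate, implied by the lpj value below

print(f"{1e9 / freq_hz:.1f} ns")        # 12.0 -> sched_clock resolution
lpj = freq_hz // HZ                     # loops-per-jiffy when calibration is skipped
print(lpj)                              # 83333
print(f"{lpj / (500_000 / HZ):.2f}")    # 166.67; the kernel truncates and prints 166.66
```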
Jan 16 23:59:03.283759 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 23:59:03.283783 kernel: CPU features: detected: 32-bit EL1 Support
Jan 16 23:59:03.283801 kernel: CPU features: detected: CRC32 instructions
Jan 16 23:59:03.286271 kernel: CPU: All CPU(s) started at EL1
Jan 16 23:59:03.286318 kernel: alternatives: applying system-wide alternatives
Jan 16 23:59:03.286342 kernel: devtmpfs: initialized
Jan 16 23:59:03.286361 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 23:59:03.286380 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 23:59:03.286399 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 23:59:03.286417 kernel: SMBIOS 3.0.0 present.
Jan 16 23:59:03.286441 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 16 23:59:03.286460 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 23:59:03.286478 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 23:59:03.286497 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 23:59:03.286516 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 23:59:03.286534 kernel: audit: initializing netlink subsys (disabled)
Jan 16 23:59:03.286553 kernel: audit: type=2000 audit(0.294:1): state=initialized audit_enabled=0 res=1
Jan 16 23:59:03.286572 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 23:59:03.286595 kernel: cpuidle: using governor menu
Jan 16 23:59:03.286614 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 23:59:03.286633 kernel: ASID allocator initialised with 65536 entries
Jan 16 23:59:03.286651 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 23:59:03.286670 kernel: Serial: AMBA PL011 UART driver
Jan 16 23:59:03.286688 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 16 23:59:03.286707 kernel: Modules: 509008 pages in range for PLT usage
Jan 16 23:59:03.286725 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 23:59:03.286744 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 23:59:03.286767 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 23:59:03.286786 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 23:59:03.286804 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 23:59:03.286848 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 23:59:03.286868 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 23:59:03.286887 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 23:59:03.286906 kernel: ACPI: Added _OSI(Module Device)
Jan 16 23:59:03.286925 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 23:59:03.286943 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 23:59:03.286968 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 23:59:03.286987 kernel: ACPI: Interpreter enabled
Jan 16 23:59:03.287006 kernel: ACPI: Using GIC for interrupt routing
Jan 16 23:59:03.287024 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 23:59:03.287063 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 16 23:59:03.287423 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 23:59:03.287689 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 23:59:03.288035 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 23:59:03.288250 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 16 23:59:03.288456 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 16 23:59:03.288482 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 16 23:59:03.288502 kernel: acpiphp: Slot [1] registered
Jan 16 23:59:03.288520 kernel: acpiphp: Slot [2] registered
Jan 16 23:59:03.288539 kernel: acpiphp: Slot [3] registered
Jan 16 23:59:03.288558 kernel: acpiphp: Slot [4] registered
Jan 16 23:59:03.288576 kernel: acpiphp: Slot [5] registered
Jan 16 23:59:03.288601 kernel: acpiphp: Slot [6] registered
Jan 16 23:59:03.288620 kernel: acpiphp: Slot [7] registered
Jan 16 23:59:03.288638 kernel: acpiphp: Slot [8] registered
Jan 16 23:59:03.288657 kernel: acpiphp: Slot [9] registered
Jan 16 23:59:03.288676 kernel: acpiphp: Slot [10] registered
Jan 16 23:59:03.288695 kernel: acpiphp: Slot [11] registered
Jan 16 23:59:03.288713 kernel: acpiphp: Slot [12] registered
Jan 16 23:59:03.288732 kernel: acpiphp: Slot [13] registered
Jan 16 23:59:03.288750 kernel: acpiphp: Slot [14] registered
Jan 16 23:59:03.288768 kernel: acpiphp: Slot [15] registered
Jan 16 23:59:03.288792 kernel: acpiphp: Slot [16] registered
Jan 16 23:59:03.297453 kernel: acpiphp: Slot [17] registered
Jan 16 23:59:03.297509 kernel: acpiphp: Slot [18] registered
Jan 16 23:59:03.297528 kernel: acpiphp: Slot [19] registered
Jan 16 23:59:03.297547 kernel: acpiphp: Slot [20] registered
Jan 16 23:59:03.297566 kernel: acpiphp: Slot [21] registered
Jan 16 23:59:03.297584 kernel: acpiphp: Slot [22] registered
Jan 16 23:59:03.297603 kernel: acpiphp: Slot [23] registered
Jan 16 23:59:03.297621 kernel: acpiphp: Slot [24] registered
Jan 16 23:59:03.297652 kernel: acpiphp: Slot [25] registered
Jan 16 23:59:03.297672 kernel: acpiphp: Slot [26] registered
Jan 16 23:59:03.297690 kernel: acpiphp: Slot [27] registered
Jan 16 23:59:03.297710 kernel: acpiphp: Slot [28] registered
Jan 16 23:59:03.297729 kernel: acpiphp: Slot [29] registered
Jan 16 23:59:03.297748 kernel: acpiphp: Slot [30] registered
Jan 16 23:59:03.297768 kernel: acpiphp: Slot [31] registered
Jan 16 23:59:03.297787 kernel: PCI host bridge to bus 0000:00
Jan 16 23:59:03.299358 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 16 23:59:03.299603 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 23:59:03.299834 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:59:03.302806 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 16 23:59:03.303110 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 16 23:59:03.303342 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 16 23:59:03.303558 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 16 23:59:03.303801 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 16 23:59:03.304051 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 16 23:59:03.304262 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:59:03.304482 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 16 23:59:03.304685 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 16 23:59:03.305969 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 16 23:59:03.306206 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 16 23:59:03.306431 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 16 23:59:03.306633 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 16 23:59:03.307366 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 16 23:59:03.307600 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 16 23:59:03.307629 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 16 23:59:03.307650 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 16 23:59:03.307673 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 16 23:59:03.307692 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 16 23:59:03.307723 kernel: iommu: Default domain type: Translated
Jan 16 23:59:03.307742 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 16 23:59:03.307762 kernel: efivars: Registered efivars operations
Jan 16 23:59:03.307780 kernel: vgaarb: loaded
Jan 16 23:59:03.307799 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 16 23:59:03.307921 kernel: VFS: Disk quotas dquot_6.6.0
Jan 16 23:59:03.307943 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 16 23:59:03.307963 kernel: pnp: PnP ACPI init
Jan 16 23:59:03.308201 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 16 23:59:03.308237 kernel: pnp: PnP ACPI: found 1 devices
Jan 16 23:59:03.308256 kernel: NET: Registered PF_INET protocol family
Jan 16 23:59:03.308275 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 16 23:59:03.308294 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 16 23:59:03.308314 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 16 23:59:03.308333 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 16 23:59:03.308351 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 16 23:59:03.308370 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 16 23:59:03.308394 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:59:03.308413 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 16 23:59:03.308432 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 16 23:59:03.308450 kernel: PCI: CLS 0 bytes, default 64
Jan 16 23:59:03.308469 kernel: kvm [1]: HYP mode not available
Jan 16 23:59:03.308488 kernel: Initialise system trusted keyrings
Jan 16 23:59:03.308506 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 16 23:59:03.308525 kernel: Key type asymmetric registered
Jan 16 23:59:03.308543 kernel: Asymmetric key parser 'x509' registered
Jan 16 23:59:03.308567 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 16 23:59:03.308586 kernel: io scheduler mq-deadline registered
Jan 16 23:59:03.308605 kernel: io scheduler kyber registered
Jan 16 23:59:03.308623 kernel: io scheduler bfq registered
Jan 16 23:59:03.308867 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 16 23:59:03.308897 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 16 23:59:03.308917 kernel: ACPI: button: Power Button [PWRB]
Jan 16 23:59:03.308936 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 16 23:59:03.308962 kernel: ACPI: button: Sleep Button [SLPB]
Jan 16 23:59:03.308982 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 16 23:59:03.309001 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 16 23:59:03.309220 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 16 23:59:03.309247 kernel: printk: console [ttyS0] disabled
Jan 16 23:59:03.309266 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 16 23:59:03.309286 kernel: printk: console [ttyS0] enabled
Jan 16 23:59:03.309304 kernel: printk: bootconsole [uart0] disabled
Jan 16 23:59:03.309323 kernel: thunder_xcv, ver 1.0
Jan 16 23:59:03.309362 kernel: thunder_bgx, ver 1.0
Jan 16 23:59:03.309390 kernel: nicpf, ver 1.0
Jan 16 23:59:03.309409 kernel: nicvf, ver 1.0
Jan 16 23:59:03.309635 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 16 23:59:03.309879 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T23:59:02 UTC (1768607942)
Jan 16 23:59:03.309907 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 16 23:59:03.309928 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 16 23:59:03.309947 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 16 23:59:03.309975 kernel: watchdog: Hard watchdog permanently disabled
Jan 16 23:59:03.309994 kernel: NET: Registered PF_INET6 protocol family
Jan 16 23:59:03.310013 kernel: Segment Routing with IPv6
Jan 16 23:59:03.310032 kernel: In-situ OAM (IOAM) with IPv6
Jan 16 23:59:03.310051 kernel: NET: Registered PF_PACKET protocol family
Jan 16 23:59:03.310069 kernel: Key type dns_resolver registered
Jan 16 23:59:03.310088 kernel: registered taskstats version 1
Jan 16 23:59:03.310108 kernel: Loading compiled-in X.509 certificates
Jan 16 23:59:03.310127 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: 0aabad27df82424bfffc9b1a502a9ae84b35bad4'
Jan 16 23:59:03.310147 kernel: Key type .fscrypt registered
Jan 16 23:59:03.310171 kernel: Key type fscrypt-provisioning registered
Jan 16 23:59:03.310190 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 16 23:59:03.310209 kernel: ima: Allocated hash algorithm: sha1
Jan 16 23:59:03.310228 kernel: ima: No architecture policies found
Jan 16 23:59:03.310247 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 16 23:59:03.310266 kernel: clk: Disabling unused clocks
Jan 16 23:59:03.310284 kernel: Freeing unused kernel memory: 39424K
Jan 16 23:59:03.310303 kernel: Run /init as init process
Jan 16 23:59:03.310321 kernel: with arguments:
Jan 16 23:59:03.310345 kernel: /init
Jan 16 23:59:03.310363 kernel: with environment:
Jan 16 23:59:03.310381 kernel: HOME=/
Jan 16 23:59:03.310399 kernel: TERM=linux
Jan 16 23:59:03.310423 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:59:03.310447 systemd[1]: Detected virtualization amazon.
Jan 16 23:59:03.310469 systemd[1]: Detected architecture arm64.
Jan 16 23:59:03.310493 systemd[1]: Running in initrd.
Jan 16 23:59:03.310513 systemd[1]: No hostname configured, using default hostname.
Jan 16 23:59:03.310533 systemd[1]: Hostname set to <localhost>.
Jan 16 23:59:03.310554 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:59:03.310574 systemd[1]: Queued start job for default target initrd.target.
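The rtc-efi line above prints the same instant twice, once as an ISO timestamp and once as a Unix epoch; the two agree:

```python
from datetime import datetime, timezone

# "setting system clock to 2026-01-16T23:59:02 UTC (1768607942)"
ts = datetime(2026, 1, 16, 23, 59, 2, tzinfo=timezone.utc)
print(int(ts.timestamp()))   # 1768607942, matching the epoch rtc-efi logged
```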
Jan 16 23:59:03.310594 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:03.310615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:03.310636 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 16 23:59:03.310662 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:59:03.310683 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 16 23:59:03.310704 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 16 23:59:03.310727 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 16 23:59:03.310748 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 16 23:59:03.310770 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:03.310791 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:03.310859 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:59:03.310885 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:59:03.310907 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:59:03.310927 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:59:03.310948 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:59:03.310968 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:59:03.310990 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:59:03.311010 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:59:03.311030 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:03.311057 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:03.311078 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:03.311099 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:59:03.311119 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 16 23:59:03.311141 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:59:03.311161 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 16 23:59:03.311181 systemd[1]: Starting systemd-fsck-usr.service...
Jan 16 23:59:03.311202 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:59:03.311227 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:59:03.311248 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:03.311268 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 16 23:59:03.311289 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:03.311351 systemd-journald[251]: Collecting audit messages is disabled.
Jan 16 23:59:03.311401 systemd[1]: Finished systemd-fsck-usr.service.
Jan 16 23:59:03.311423 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 16 23:59:03.311444 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:59:03.311465 kernel: Bridge firewalling registered
Jan 16 23:59:03.311491 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:03.311514 systemd-journald[251]: Journal started
Jan 16 23:59:03.311552 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2582644125acdd1877559f008ac2bd) is 8.0M, max 75.3M, 67.3M free.
Jan 16 23:59:03.249698 systemd-modules-load[252]: Inserted module 'overlay'
Jan 16 23:59:03.318156 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:59:03.304885 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 16 23:59:03.326276 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:03.334881 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:59:03.351303 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:03.365087 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:59:03.370547 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:59:03.375097 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:59:03.407543 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:03.425955 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:03.433493 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:03.448244 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:03.452042 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:03.463090 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 16 23:59:03.500708 dracut-cmdline[288]: dracut-dracut-053
Jan 16 23:59:03.511785 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=d499dc3f7d5d4118d4e4300ad00f17ad72271d2a2f6bb9119457036ac5212c83
Jan 16 23:59:03.551434 systemd-resolved[285]: Positive Trust Anchors:
Jan 16 23:59:03.551479 systemd-resolved[285]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:59:03.551544 systemd-resolved[285]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:59:03.711859 kernel: SCSI subsystem initialized
Jan 16 23:59:03.721862 kernel: Loading iSCSI transport class v2.0-870.
Jan 16 23:59:03.732856 kernel: iscsi: registered transport (tcp)
Jan 16 23:59:03.756356 kernel: iscsi: registered transport (qla4xxx)
Jan 16 23:59:03.756447 kernel: QLogic iSCSI HBA Driver
Jan 16 23:59:03.807849 kernel: random: crng init done
Jan 16 23:59:03.808375 systemd-resolved[285]: Defaulting to hostname 'linux'.
Jan 16 23:59:03.810694 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:03.821366 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:59:03.855468 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:59:03.874135 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 16 23:59:03.915776 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 16 23:59:03.915889 kernel: device-mapper: uevent: version 1.0.3
Jan 16 23:59:03.915921 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 16 23:59:03.990880 kernel: raid6: neonx8 gen() 6657 MB/s
Jan 16 23:59:04.007904 kernel: raid6: neonx4 gen() 6498 MB/s
Jan 16 23:59:04.025873 kernel: raid6: neonx2 gen() 5354 MB/s
Jan 16 23:59:04.042880 kernel: raid6: neonx1 gen() 3890 MB/s
Jan 16 23:59:04.060893 kernel: raid6: int64x8 gen() 3798 MB/s
Jan 16 23:59:04.078875 kernel: raid6: int64x4 gen() 3670 MB/s
Jan 16 23:59:04.095890 kernel: raid6: int64x2 gen() 3568 MB/s
Jan 16 23:59:04.114126 kernel: raid6: int64x1 gen() 2715 MB/s
Jan 16 23:59:04.114222 kernel: raid6: using algorithm neonx8 gen() 6657 MB/s
Jan 16 23:59:04.133040 kernel: raid6: .... xor() 4710 MB/s, rmw enabled
Jan 16 23:59:04.133132 kernel: raid6: using neon recovery algorithm
Jan 16 23:59:04.143301 kernel: xor: measuring software checksum speed
Jan 16 23:59:04.143384 kernel: 8regs : 11006 MB/sec
Jan 16 23:59:04.144753 kernel: 32regs : 11915 MB/sec
Jan 16 23:59:04.147574 kernel: arm64_neon : 9030 MB/sec
Jan 16 23:59:04.147643 kernel: xor: using function: 32regs (11915 MB/sec)
Jan 16 23:59:04.236869 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 16 23:59:04.260994 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:59:04.272310 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:04.310308 systemd-udevd[469]: Using default interface naming scheme 'v255'.
Jan 16 23:59:04.318675 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:04.337283 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
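The trust-anchor dump from systemd-resolved above is its built-in default set: one positive anchor (the root zone's KSK-2017 DS record) and a list of special-use and private zones where DNSSEC validation is skipped. The sixteen 172.16/12 reverse zones in that list follow directly from the RFC 1918 block and can be regenerated mechanically:

```python
# Regenerate the "16.172.in-addr.arpa .. 31.172.in-addr.arpa" entries from the log.
rfc1918_172 = [f"{octet}.172.in-addr.arpa" for octet in range(16, 32)]
print(" ".join(rfc1918_172))
```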
Jan 16 23:59:04.377791 dracut-pre-trigger[475]: rd.md=0: removing MD RAID activation
Jan 16 23:59:04.442004 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:59:04.456247 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:59:04.585465 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:04.599298 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 16 23:59:04.649306 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:59:04.662420 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:59:04.672358 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:04.678338 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:59:04.696131 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 16 23:59:04.747576 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:59:04.809983 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 16 23:59:04.810058 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 16 23:59:04.816762 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:59:04.824706 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 16 23:59:04.825047 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 16 23:59:04.817315 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:04.835787 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:04.849880 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:b8:38:68:72:95
Jan 16 23:59:04.838629 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:59:04.838943 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:04.840978 (udev-worker)[539]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:59:04.843127 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:04.864023 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:04.894227 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 16 23:59:04.894294 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 16 23:59:04.904853 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 16 23:59:04.911248 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:04.921093 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 16 23:59:04.921162 kernel: GPT:9289727 != 33554431
Jan 16 23:59:04.921188 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 16 23:59:04.921213 kernel: GPT:9289727 != 33554431
Jan 16 23:59:04.922866 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 16 23:59:04.924002 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:04.926320 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 16 23:59:04.963470 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
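The GPT warnings above are the usual signature of an EBS volume that is larger than the disk image written to it: the backup GPT header still sits at LBA 9289727 (the last sector of the ~4.4 GiB image) rather than at LBA 33554431, the last sector of the 16 GiB volume, and Flatcar's first-boot machinery rewrites it moments later (see the disk-uuid lines further down). A minimal sketch of the comparison the kernel is making, reading the alternate-LBA field out of the primary GPT header; the image path is hypothetical:

```python
import struct

SECTOR = 512
path = "/tmp/disk.img"   # hypothetical raw disk image carrying a GPT

with open(path, "rb") as f:
    f.seek(1 * SECTOR)                       # primary GPT header lives at LBA 1
    hdr = f.read(92)
    last_lba = f.seek(0, 2) // SECTOR - 1    # last addressable sector of the device

assert hdr[:8] == b"EFI PART"                # GPT signature
# header offset 24: current-header LBA; offset 32: alternate (backup) header LBA
current_lba, alternate_lba = struct.unpack_from("<QQ", hdr, 24)
if alternate_lba != last_lba:
    print(f"GPT:{alternate_lba} != {last_lba}")   # the same complaint the kernel logs
```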
Jan 16 23:59:05.017882 kernel: BTRFS: device fsid 257557f7-4bf9-4b29-86df-93ad67770d31 devid 1 transid 37 /dev/nvme0n1p3 scanned by (udev-worker) (524)
Jan 16 23:59:05.051309 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (517)
Jan 16 23:59:05.121134 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 16 23:59:05.183197 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 16 23:59:05.202727 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 16 23:59:05.219852 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 16 23:59:05.222903 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 16 23:59:05.238252 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 16 23:59:05.254756 disk-uuid[658]: Primary Header is updated.
Jan 16 23:59:05.254756 disk-uuid[658]: Secondary Entries is updated.
Jan 16 23:59:05.254756 disk-uuid[658]: Secondary Header is updated.
Jan 16 23:59:05.270876 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:05.280846 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:05.293861 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:06.292856 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 16 23:59:06.295445 disk-uuid[659]: The operation has completed successfully.
Jan 16 23:59:06.525589 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 16 23:59:06.525916 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 16 23:59:06.571133 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 16 23:59:06.588415 sh[1004]: Success
Jan 16 23:59:06.617862 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 16 23:59:06.747578 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 16 23:59:06.754680 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 16 23:59:06.760233 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 16 23:59:06.797668 kernel: BTRFS info (device dm-0): first mount of filesystem 257557f7-4bf9-4b29-86df-93ad67770d31
Jan 16 23:59:06.797741 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:06.797768 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 16 23:59:06.799604 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 16 23:59:06.801006 kernel: BTRFS info (device dm-0): using free space tree
Jan 16 23:59:06.828861 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 16 23:59:06.841928 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 16 23:59:06.846512 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 16 23:59:06.856131 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 16 23:59:06.873230 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
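verity-setup above wires the USR partition through dm-verity: reads from /dev/mapper/usr are verified by hashing each 4 KiB data block, packing the digests into hash blocks and hashing again, up a Merkle tree whose root must equal the verity.usrhash= value on the kernel command line (the "sha256 using implementation sha256-ce" line is the kernel picking the ARMv8 crypto-extension SHA-256). A toy version of that root computation, deliberately ignoring the salt and the real on-disk hash layout:

```python
import hashlib

BLOCK = 4096

def toy_verity_root(data: bytes) -> str:
    """Merkle root over 4 KiB blocks, dm-verity style (unsalted, simplified)."""
    level = [hashlib.sha256(data[i:i + BLOCK]).digest()
             for i in range(0, len(data), BLOCK)]
    while len(level) > 1:                    # pack digests into blocks, hash again
        packed = b"".join(level)
        level = [hashlib.sha256(packed[i:i + BLOCK]).digest()
                 for i in range(0, len(packed), BLOCK)]
    return level[0].hex()

print(toy_verity_root(b"\0" * 5 * BLOCK))    # any single bit flip changes this root
```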
Jan 16 23:59:06.908984 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:06.909058 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:06.911104 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:06.933864 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:06.952464 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 16 23:59:06.956150 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:06.965932 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 16 23:59:06.978260 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 16 23:59:07.067125 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:59:07.084230 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:59:07.142581 systemd-networkd[1197]: lo: Link UP
Jan 16 23:59:07.144530 systemd-networkd[1197]: lo: Gained carrier
Jan 16 23:59:07.150988 systemd-networkd[1197]: Enumeration completed
Jan 16 23:59:07.151204 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:59:07.154279 systemd[1]: Reached target network.target - Network.
Jan 16 23:59:07.156196 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:07.156203 systemd-networkd[1197]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:59:07.165053 systemd-networkd[1197]: eth0: Link UP
Jan 16 23:59:07.165062 systemd-networkd[1197]: eth0: Gained carrier
Jan 16 23:59:07.165087 systemd-networkd[1197]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:07.220000 systemd-networkd[1197]: eth0: DHCPv4 address 172.31.23.167/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 16 23:59:07.238921 ignition[1133]: Ignition 2.19.0
Jan 16 23:59:07.238958 ignition[1133]: Stage: fetch-offline
Jan 16 23:59:07.241024 ignition[1133]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:07.248512 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:59:07.241056 ignition[1133]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:07.241801 ignition[1133]: Ignition finished successfully
Jan 16 23:59:07.272431 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 16 23:59:07.302253 ignition[1208]: Ignition 2.19.0
Jan 16 23:59:07.302287 ignition[1208]: Stage: fetch
Jan 16 23:59:07.303095 ignition[1208]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:07.303126 ignition[1208]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:07.303310 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:07.321362 ignition[1208]: PUT result: OK
Jan 16 23:59:07.326146 ignition[1208]: parsed url from cmdline: ""
Jan 16 23:59:07.326162 ignition[1208]: no config URL provided
Jan 16 23:59:07.326203 ignition[1208]: reading system config file "/usr/lib/ignition/user.ign"
Jan 16 23:59:07.326233 ignition[1208]: no config at "/usr/lib/ignition/user.ign"
Jan 16 23:59:07.326561 ignition[1208]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:07.339046 ignition[1208]: PUT result: OK
Jan 16 23:59:07.339142 ignition[1208]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 16 23:59:07.342541 ignition[1208]: GET result: OK
Jan 16 23:59:07.342779 ignition[1208]: parsing config with SHA512: e69d01166bcbcab12232c7f7a55122cd1eb806552f6d6d89b75f6ab3a82a4ec232a667cb1ed18b00304fda7973af6fa9b34407f86cbb5cd657c14f9ce40fe4bd
Jan 16 23:59:07.354962 unknown[1208]: fetched base config from "system"
Jan 16 23:59:07.354996 unknown[1208]: fetched base config from "system"
Jan 16 23:59:07.355011 unknown[1208]: fetched user config from "aws"
Jan 16 23:59:07.360745 ignition[1208]: fetch: fetch complete
Jan 16 23:59:07.360761 ignition[1208]: fetch: fetch passed
Jan 16 23:59:07.370527 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 16 23:59:07.364847 ignition[1208]: Ignition finished successfully
Jan 16 23:59:07.388276 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 16 23:59:07.418263 ignition[1215]: Ignition 2.19.0
Jan 16 23:59:07.418295 ignition[1215]: Stage: kargs
Jan 16 23:59:07.419097 ignition[1215]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:07.419126 ignition[1215]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:07.419314 ignition[1215]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:07.421721 ignition[1215]: PUT result: OK
Jan 16 23:59:07.436673 ignition[1215]: kargs: kargs passed
Jan 16 23:59:07.436840 ignition[1215]: Ignition finished successfully
Jan 16 23:59:07.443910 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 16 23:59:07.456192 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 16 23:59:07.488541 ignition[1221]: Ignition 2.19.0
Jan 16 23:59:07.488572 ignition[1221]: Stage: disks
Jan 16 23:59:07.490548 ignition[1221]: no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:07.490577 ignition[1221]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:07.491948 ignition[1221]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:07.499684 ignition[1221]: PUT result: OK
Jan 16 23:59:07.505182 ignition[1221]: disks: disks passed
Jan 16 23:59:07.505297 ignition[1221]: Ignition finished successfully
Jan 16 23:59:07.510906 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 16 23:59:07.516225 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 16 23:59:07.519173 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:59:07.522169 systemd[1]: Reached target local-fs.target - Local File Systems.
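The PUT/GET pairs in the Ignition fetch stage above are the IMDSv2 session flow: a PUT to /latest/api/token yields a short-lived session token, which is then presented on the GET for the user-data, and the SHA512 Ignition logs afterwards is simply the digest of the bytes it fetched. A stdlib-only re-creation of those steps (it only works from inside an EC2 instance, and the dated user-data path matches the one in the log):

```python
import hashlib
import urllib.request

IMDS = "http://169.254.169.254"

# PUT http://169.254.169.254/latest/api/token: attempt #1
token_req = urllib.request.Request(
    f"{IMDS}/latest/api/token",
    method="PUT",
    headers={"X-aws-ec2-metadata-token-ttl-seconds": "21600"},
)
token = urllib.request.urlopen(token_req, timeout=2).read().decode()

# GET http://169.254.169.254/2019-10-01/user-data: attempt #1
data_req = urllib.request.Request(
    f"{IMDS}/2019-10-01/user-data",
    headers={"X-aws-ec2-metadata-token": token},
)
user_data = urllib.request.urlopen(data_req, timeout=2).read()

# "parsing config with SHA512: ..." is the digest of exactly these bytes
print(hashlib.sha512(user_data).hexdigest())
```

The DHCPv4 lease reported by systemd-networkd just before this stage is also internally consistent: 172.31.23.167/20 lands in the 172.31.16.0/20 VPC subnet, whose first host address is the 172.31.16.1 gateway, so `ipaddress.ip_address("172.31.16.1") in ipaddress.ip_interface("172.31.23.167/20").network` evaluates to True.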
Jan 16 23:59:07.524793 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:59:07.528140 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:59:07.546257 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 16 23:59:07.594282 systemd-fsck[1230]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 16 23:59:07.601894 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 16 23:59:07.614024 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 16 23:59:07.718882 kernel: EXT4-fs (nvme0n1p9): mounted filesystem b70ce012-b356-4603-a688-ee0b3b7de551 r/w with ordered data mode. Quota mode: none.
Jan 16 23:59:07.720376 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 16 23:59:07.725559 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 16 23:59:07.749028 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:07.756276 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 16 23:59:07.761759 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 16 23:59:07.766471 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 16 23:59:07.769132 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:59:07.800969 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1249)
Jan 16 23:59:07.806000 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:07.806104 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:07.806137 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:07.804709 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 16 23:59:07.821341 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 16 23:59:07.831868 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:07.836177 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:07.934605 initrd-setup-root[1273]: cut: /sysroot/etc/passwd: No such file or directory
Jan 16 23:59:07.944518 initrd-setup-root[1280]: cut: /sysroot/etc/group: No such file or directory
Jan 16 23:59:07.955386 initrd-setup-root[1287]: cut: /sysroot/etc/shadow: No such file or directory
Jan 16 23:59:07.965447 initrd-setup-root[1294]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 16 23:59:08.150567 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 16 23:59:08.160233 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 16 23:59:08.168073 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 16 23:59:08.188705 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 16 23:59:08.196416 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:08.240041 ignition[1362]: INFO : Ignition 2.19.0
Jan 16 23:59:08.240041 ignition[1362]: INFO : Stage: mount
Jan 16 23:59:08.239882 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 16 23:59:08.251298 ignition[1362]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:08.251298 ignition[1362]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:08.251298 ignition[1362]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:08.261237 ignition[1362]: INFO : PUT result: OK
Jan 16 23:59:08.266365 ignition[1362]: INFO : mount: mount passed
Jan 16 23:59:08.266365 ignition[1362]: INFO : Ignition finished successfully
Jan 16 23:59:08.271308 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 16 23:59:08.283078 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 16 23:59:08.311257 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 16 23:59:08.342994 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1373)
Jan 16 23:59:08.348030 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 629d412e-8b84-495a-b9b7-c361e81b0700
Jan 16 23:59:08.348115 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 16 23:59:08.348144 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 16 23:59:08.355856 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 16 23:59:08.360170 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 16 23:59:08.403607 ignition[1390]: INFO : Ignition 2.19.0
Jan 16 23:59:08.403607 ignition[1390]: INFO : Stage: files
Jan 16 23:59:08.407708 ignition[1390]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:08.407708 ignition[1390]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:08.407708 ignition[1390]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:08.416533 ignition[1390]: INFO : PUT result: OK
Jan 16 23:59:08.422026 ignition[1390]: DEBUG : files: compiled without relabeling support, skipping
Jan 16 23:59:08.425561 ignition[1390]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 16 23:59:08.425561 ignition[1390]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 16 23:59:08.435060 ignition[1390]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 16 23:59:08.439273 ignition[1390]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 16 23:59:08.439273 ignition[1390]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 16 23:59:08.436342 unknown[1390]: wrote ssh authorized keys file for user: core
Jan 16 23:59:08.448059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 16 23:59:08.448059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/etc/flatcar-cgroupv1"
Jan 16 23:59:08.448059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 16 23:59:08.448059 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1
Jan 16 23:59:08.571628 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): GET result: OK
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/install.sh"
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/install.sh"
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 16 23:59:08.752895 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:59:08.778173 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1
Jan 16 23:59:09.000019 systemd-networkd[1197]: eth0: Gained IPv6LL
Jan 16 23:59:09.225714 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): GET result: OK
Jan 16 23:59:09.623117 ignition[1390]: INFO : files: createFilesystemsFiles: createFiles: op(b): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw"
Jan 16 23:59:09.623117 ignition[1390]: INFO : files: op(c): [started] processing unit "containerd.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(c): op(d): [started] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(c): op(d): [finished] writing systemd drop-in "10-use-cgroupfs.conf" at "/sysroot/etc/systemd/system/containerd.service.d/10-use-cgroupfs.conf"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(c): [finished] processing unit "containerd.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(e): [started] processing unit "prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(e): op(f): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(e): op(f): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(e): [finished] processing unit "prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(10): [started] setting preset to enabled for "prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: op(10): [finished] setting preset to enabled for "prepare-helm.service"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: createResultFile: createFiles: op(11): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 16 23:59:09.632257 ignition[1390]: INFO : files: files passed
Jan 16 23:59:09.632257 ignition[1390]: INFO : Ignition finished successfully
Jan 16 23:59:09.647884 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 16 23:59:09.672034 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 16 23:59:09.705131 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 16 23:59:09.718983 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 23:59:09.720400 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 23:59:09.752741 initrd-setup-root-after-ignition[1419]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:09.752741 initrd-setup-root-after-ignition[1419]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:09.762923 initrd-setup-root-after-ignition[1423]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 16 23:59:09.766664 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:59:09.774088 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 16 23:59:09.786294 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 16 23:59:09.856629 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 16 23:59:09.857174 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 16 23:59:09.866840 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 16 23:59:09.870108 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 16 23:59:09.873091 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 16 23:59:09.886267 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 16 23:59:09.919923 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:59:09.934390 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 16 23:59:09.961681 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:59:09.965113 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:09.968360 systemd[1]: Stopped target timers.target - Timer Units.
Jan 16 23:59:09.971136 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 16 23:59:09.971668 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 16 23:59:09.984263 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 16 23:59:09.992243 systemd[1]: Stopped target basic.target - Basic System.
Jan 16 23:59:09.996043 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 16 23:59:10.004071 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 16 23:59:10.007367 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 16 23:59:10.011120 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 16 23:59:10.021883 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 16 23:59:10.025179 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 16 23:59:10.028906 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 16 23:59:10.039590 systemd[1]: Stopped target swap.target - Swaps.
Jan 16 23:59:10.042281 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 16 23:59:10.042530 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 16 23:59:10.052184 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:10.055601 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:10.061583 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 16 23:59:10.066357 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:10.070129 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 16 23:59:10.070391 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 16 23:59:10.082591 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 16 23:59:10.082945 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 16 23:59:10.086880 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 16 23:59:10.087173 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 16 23:59:10.107314 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 16 23:59:10.110407 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 16 23:59:10.112662 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:10.127109 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 16 23:59:10.136589 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 16 23:59:10.136990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:10.142232 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 16 23:59:10.142505 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 16 23:59:10.164759 ignition[1443]: INFO : Ignition 2.19.0
Jan 16 23:59:10.164759 ignition[1443]: INFO : Stage: umount
Jan 16 23:59:10.164759 ignition[1443]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 16 23:59:10.164759 ignition[1443]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 16 23:59:10.164759 ignition[1443]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 16 23:59:10.192611 ignition[1443]: INFO : PUT result: OK
Jan 16 23:59:10.175392 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 16 23:59:10.203299 ignition[1443]: INFO : umount: umount passed
Jan 16 23:59:10.203299 ignition[1443]: INFO : Ignition finished successfully
Jan 16 23:59:10.177116 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 16 23:59:10.203301 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 16 23:59:10.207587 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 16 23:59:10.210685 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 16 23:59:10.210782 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 16 23:59:10.216032 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 16 23:59:10.216196 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 16 23:59:10.226001 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 16 23:59:10.226115 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 16 23:59:10.228586 systemd[1]: Stopped target network.target - Network.
Jan 16 23:59:10.230780 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 16 23:59:10.230949 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 16 23:59:10.233937 systemd[1]: Stopped target paths.target - Path Units.
Jan 16 23:59:10.236104 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 16 23:59:10.243172 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:10.246652 systemd[1]: Stopped target slices.target - Slice Units.
Jan 16 23:59:10.248976 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 16 23:59:10.251447 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 16 23:59:10.251544 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 16 23:59:10.254214 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 16 23:59:10.254319 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 16 23:59:10.257074 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 16 23:59:10.257214 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 16 23:59:10.259919 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 16 23:59:10.260035 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 16 23:59:10.264416 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 16 23:59:10.270459 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:10.294002 systemd-networkd[1197]: eth0: DHCPv6 lease lost
Jan 16 23:59:10.310945 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 16 23:59:10.312094 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 16 23:59:10.312314 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 16 23:59:10.322707 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 16 23:59:10.322948 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:10.330578 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 16 23:59:10.334533 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 16 23:59:10.344204 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 16 23:59:10.344290 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:10.349712 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 16 23:59:10.350780 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 16 23:59:10.383101 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 16 23:59:10.385243 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 16 23:59:10.385378 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 16 23:59:10.388543 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 16 23:59:10.388623 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:10.392026 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 16 23:59:10.392109 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:10.394983 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 16 23:59:10.395063 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:10.398322 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:10.447996 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 16 23:59:10.462130 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:10.466198 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 16 23:59:10.466335 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:10.470216 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 16 23:59:10.470297 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:10.485928 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 16 23:59:10.486035 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 16 23:59:10.488684 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 16 23:59:10.488766 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 16 23:59:10.491406 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 16 23:59:10.491491 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 16 23:59:10.508385 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 16 23:59:10.515498 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 16 23:59:10.515620 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:10.519105 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 16 23:59:10.519193 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:10.522687 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 16 23:59:10.522900 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 16 23:59:10.557162 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 16 23:59:10.558207 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 16 23:59:10.563710 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 16 23:59:10.581551 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 16 23:59:10.600978 systemd[1]: Switching root.
Jan 16 23:59:10.646595 systemd-journald[251]: Journal stopped
Jan 16 23:59:12.686159 systemd-journald[251]: Received SIGTERM from PID 1 (systemd).
Jan 16 23:59:12.686312 kernel: SELinux: policy capability network_peer_controls=1
Jan 16 23:59:12.686361 kernel: SELinux: policy capability open_perms=1
Jan 16 23:59:12.686394 kernel: SELinux: policy capability extended_socket_class=1
Jan 16 23:59:12.686428 kernel: SELinux: policy capability always_check_network=0
Jan 16 23:59:12.686466 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 16 23:59:12.686497 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 16 23:59:12.686529 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 16 23:59:12.686559 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 16 23:59:12.686594 kernel: audit: type=1403 audit(1768607951.037:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 16 23:59:12.686629 systemd[1]: Successfully loaded SELinux policy in 53.899ms.
Jan 16 23:59:12.686686 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.238ms.
Jan 16 23:59:12.686722 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 16 23:59:12.686755 systemd[1]: Detected virtualization amazon.
Jan 16 23:59:12.686791 systemd[1]: Detected architecture arm64.
Jan 16 23:59:12.686883 systemd[1]: Detected first boot.
Jan 16 23:59:12.686920 systemd[1]: Initializing machine ID from VM UUID.
Jan 16 23:59:12.686955 zram_generator::config[1508]: No configuration found.
Jan 16 23:59:12.686991 systemd[1]: Populated /etc with preset unit settings.
Jan 16 23:59:12.687025 systemd[1]: Queued start job for default target multi-user.target.
Jan 16 23:59:12.687059 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 16 23:59:12.687095 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 16 23:59:12.687132 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 16 23:59:12.687178 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 16 23:59:12.687211 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 16 23:59:12.687244 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 16 23:59:12.687277 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 16 23:59:12.687311 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 16 23:59:12.687351 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 16 23:59:12.687382 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 16 23:59:12.687413 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 16 23:59:12.687448 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 16 23:59:12.687479 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 16 23:59:12.687512 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 16 23:59:12.687545 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 16 23:59:12.687577 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 16 23:59:12.687608 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 16 23:59:12.687638 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 16 23:59:12.687671 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 16 23:59:12.687707 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 16 23:59:12.687742 systemd[1]: Reached target slices.target - Slice Units.
Jan 16 23:59:12.687788 systemd[1]: Reached target swap.target - Swaps.
Jan 16 23:59:12.687845 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 16 23:59:12.687879 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 16 23:59:12.687912 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 16 23:59:12.687942 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 16 23:59:12.687983 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 16 23:59:12.688013 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 16 23:59:12.688054 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 16 23:59:12.688085 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 16 23:59:12.688119 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 16 23:59:12.688152 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 16 23:59:12.688184 systemd[1]: Mounting media.mount - External Media Directory...
Jan 16 23:59:12.688218 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 16 23:59:12.688250 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 16 23:59:12.688288 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 16 23:59:12.688320 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 16 23:59:12.688355 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:12.688385 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 16 23:59:12.688415 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 16 23:59:12.688447 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:12.688480 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:59:12.688513 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:59:12.688546 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 16 23:59:12.688578 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:12.688616 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 16 23:59:12.688651 systemd[1]: systemd-journald.service: unit configures an IP firewall, but the local system does not support BPF/cgroup firewalling.
Jan 16 23:59:12.688686 systemd[1]: systemd-journald.service: (This warning is only shown for the first unit using IP firewalling.)
Jan 16 23:59:12.688719 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 16 23:59:12.688751 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 16 23:59:12.688782 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 16 23:59:12.688834 kernel: loop: module loaded
Jan 16 23:59:12.688870 kernel: fuse: init (API version 7.39)
Jan 16 23:59:12.688900 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 16 23:59:12.688937 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 16 23:59:12.688971 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 16 23:59:12.689005 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 16 23:59:12.689038 systemd[1]: Mounted media.mount - External Media Directory.
Jan 16 23:59:12.689069 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 16 23:59:12.689098 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 16 23:59:12.689182 systemd-journald[1604]: Collecting audit messages is disabled.
Jan 16 23:59:12.689251 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 16 23:59:12.689284 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 16 23:59:12.689336 systemd-journald[1604]: Journal started
Jan 16 23:59:12.689393 systemd-journald[1604]: Runtime Journal (/run/log/journal/ec2582644125acdd1877559f008ac2bd) is 8.0M, max 75.3M, 67.3M free.
Jan 16 23:59:12.697453 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 16 23:59:12.704416 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 16 23:59:12.704789 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 16 23:59:12.712022 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:12.712418 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:12.718924 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:59:12.719297 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:59:12.726874 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 16 23:59:12.727245 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 16 23:59:12.733572 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:12.734488 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:12.759576 kernel: ACPI: bus type drm_connector registered
Jan 16 23:59:12.742049 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 16 23:59:12.748390 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 16 23:59:12.753271 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 16 23:59:12.761382 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 23:59:12.761788 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 23:59:12.780544 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 16 23:59:12.811712 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 16 23:59:12.826104 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 16 23:59:12.838043 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 16 23:59:12.841640 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 16 23:59:12.856196 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 16 23:59:12.868175 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 16 23:59:12.872517 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 23:59:12.892150 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 16 23:59:12.896134 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:59:12.910892 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 16 23:59:12.946992 systemd-journald[1604]: Time spent on flushing to /var/log/journal/ec2582644125acdd1877559f008ac2bd is 108.541ms for 884 entries.
Jan 16 23:59:12.946992 systemd-journald[1604]: System Journal (/var/log/journal/ec2582644125acdd1877559f008ac2bd) is 8.0M, max 195.6M, 187.6M free.
Jan 16 23:59:13.099301 systemd-journald[1604]: Received client request to flush runtime journal.
Jan 16 23:59:12.936144 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 16 23:59:12.954888 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 16 23:59:12.962114 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 16 23:59:12.979680 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 16 23:59:12.984486 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 16 23:59:13.084985 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 16 23:59:13.111009 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization...
Jan 16 23:59:13.123778 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 16 23:59:13.130857 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 16 23:59:13.157532 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Jan 16 23:59:13.157574 systemd-tmpfiles[1657]: ACLs are not supported, ignoring.
Jan 16 23:59:13.179638 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 16 23:59:13.196138 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 16 23:59:13.202902 udevadm[1666]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation-early.service, lvm2-activation.service not to pull it in.
Jan 16 23:59:13.269922 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 16 23:59:13.281238 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 16 23:59:13.329073 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Jan 16 23:59:13.329718 systemd-tmpfiles[1679]: ACLs are not supported, ignoring.
Jan 16 23:59:13.341692 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 16 23:59:13.923017 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 16 23:59:13.937180 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 16 23:59:14.007213 systemd-udevd[1685]: Using default interface naming scheme 'v255'.
Jan 16 23:59:14.052229 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 16 23:59:14.075193 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 16 23:59:14.105121 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 16 23:59:14.210198 (udev-worker)[1699]: Network interface NamePolicy= disabled on kernel command line.
Jan 16 23:59:14.241348 systemd[1]: Found device dev-ttyS0.device - /dev/ttyS0.
Jan 16 23:59:14.345444 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 16 23:59:14.546271 systemd-networkd[1691]: lo: Link UP
Jan 16 23:59:14.546303 systemd-networkd[1691]: lo: Gained carrier
Jan 16 23:59:14.550317 systemd-networkd[1691]: Enumeration completed
Jan 16 23:59:14.550601 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 16 23:59:14.554393 systemd-networkd[1691]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:14.554404 systemd-networkd[1691]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 16 23:59:14.558980 systemd-networkd[1691]: eth0: Link UP
Jan 16 23:59:14.559404 systemd-networkd[1691]: eth0: Gained carrier
Jan 16 23:59:14.559457 systemd-networkd[1691]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 16 23:59:14.578205 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 16 23:59:14.593002 systemd-networkd[1691]: eth0: DHCPv4 address 172.31.23.167/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 16 23:59:14.615871 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (1705)
Jan 16 23:59:14.720417 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 16 23:59:14.899431 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 16 23:59:14.903557 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization.
Jan 16 23:59:14.917739 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes...
Jan 16 23:59:14.952802 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 16 23:59:14.961940 lvm[1811]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 23:59:15.004333 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes.
Jan 16 23:59:15.012236 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 16 23:59:15.023152 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes...
Jan 16 23:59:15.040489 lvm[1817]: WARNING: Failed to connect to lvmetad. Falling back to device scanning.
Jan 16 23:59:15.084543 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes.
Jan 16 23:59:15.091218 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 16 23:59:15.094477 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 16 23:59:15.094570 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 16 23:59:15.097583 systemd[1]: Reached target machines.target - Containers.
Jan 16 23:59:15.102481 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink).
Jan 16 23:59:15.115418 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown...
Jan 16 23:59:15.132032 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 16 23:59:15.142259 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:15.150246 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 16 23:59:15.159228 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk...
Jan 16 23:59:15.167631 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 16 23:59:15.172662 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown.
Jan 16 23:59:15.213222 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 16 23:59:15.253745 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 16 23:59:15.258236 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk.
Jan 16 23:59:15.266940 kernel: loop0: detected capacity change from 0 to 114432
Jan 16 23:59:15.300874 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 16 23:59:15.325968 kernel: loop1: detected capacity change from 0 to 207008
Jan 16 23:59:15.467943 kernel: loop2: detected capacity change from 0 to 52536
Jan 16 23:59:15.597910 kernel: loop3: detected capacity change from 0 to 114328
Jan 16 23:59:15.657874 kernel: loop4: detected capacity change from 0 to 114432
Jan 16 23:59:15.682865 kernel: loop5: detected capacity change from 0 to 207008
Jan 16 23:59:15.722919 kernel: loop6: detected capacity change from 0 to 52536
Jan 16 23:59:15.723343 systemd-networkd[1691]: eth0: Gained IPv6LL
Jan 16 23:59:15.732701 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured.
Jan 16 23:59:15.756887 kernel: loop7: detected capacity change from 0 to 114328
Jan 16 23:59:15.782406 (sd-merge)[1840]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 16 23:59:15.783600 (sd-merge)[1840]: Merged extensions into '/usr'.
Jan 16 23:59:15.818535 systemd[1]: Reloading requested from client PID 1826 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 16 23:59:15.818575 systemd[1]: Reloading...
Jan 16 23:59:15.953929 ldconfig[1822]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 16 23:59:16.014928 zram_generator::config[1873]: No configuration found.
Jan 16 23:59:16.292551 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:59:16.464704 systemd[1]: Reloading finished in 645 ms.
Jan 16 23:59:16.494158 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 16 23:59:16.497838 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 16 23:59:16.517199 systemd[1]: Starting ensure-sysext.service...
Jan 16 23:59:16.527408 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 16 23:59:16.547211 systemd[1]: Reloading requested from client PID 1928 ('systemctl') (unit ensure-sysext.service)...
Jan 16 23:59:16.547250 systemd[1]: Reloading...
Jan 16 23:59:16.594123 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 16 23:59:16.594956 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 16 23:59:16.597105 systemd-tmpfiles[1929]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 16 23:59:16.597940 systemd-tmpfiles[1929]: ACLs are not supported, ignoring.
Jan 16 23:59:16.598144 systemd-tmpfiles[1929]: ACLs are not supported, ignoring.
Jan 16 23:59:16.607741 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:59:16.607782 systemd-tmpfiles[1929]: Skipping /boot
Jan 16 23:59:16.637676 systemd-tmpfiles[1929]: Detected autofs mount point /boot during canonicalization of boot.
Jan 16 23:59:16.637715 systemd-tmpfiles[1929]: Skipping /boot
Jan 16 23:59:16.741875 zram_generator::config[1958]: No configuration found.
Jan 16 23:59:17.035595 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly.
Jan 16 23:59:17.210950 systemd[1]: Reloading finished in 662 ms.
Jan 16 23:59:17.241092 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 16 23:59:17.270221 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules...
Jan 16 23:59:17.288320 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 16 23:59:17.296726 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 16 23:59:17.316154 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 16 23:59:17.323595 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 16 23:59:17.365052 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:17.376391 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:17.392532 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:59:17.413342 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:17.417231 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:17.429746 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:17.430213 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:17.436773 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:59:17.442716 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:59:17.470376 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:17.483578 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:17.502442 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 16 23:59:17.510719 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:17.523464 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 16 23:59:17.532356 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:17.532765 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:17.537252 augenrules[2049]: No rules
Jan 16 23:59:17.547360 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:17.547784 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:17.557191 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules.
Jan 16 23:59:17.593621 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 16 23:59:17.598802 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 16 23:59:17.601190 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 16 23:59:17.616617 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 16 23:59:17.636595 systemd[1]: Finished ensure-sysext.service.
Jan 16 23:59:17.641450 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 16 23:59:17.652218 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 16 23:59:17.660204 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 16 23:59:17.680115 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 16 23:59:17.686372 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 16 23:59:17.686492 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 16 23:59:17.686577 systemd[1]: Reached target time-set.target - System Time Set.
Jan 16 23:59:17.702514 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 16 23:59:17.703040 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 16 23:59:17.706325 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 16 23:59:17.706779 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 16 23:59:17.729116 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 16 23:59:17.729631 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 16 23:59:17.739684 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 16 23:59:17.743256 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 16 23:59:17.757573 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 16 23:59:17.768591 systemd-resolved[2021]: Positive Trust Anchors:
Jan 16 23:59:17.768645 systemd-resolved[2021]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 16 23:59:17.768713 systemd-resolved[2021]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 16 23:59:17.782313 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 16 23:59:17.788879 systemd-resolved[2021]: Defaulting to hostname 'linux'.
Jan 16 23:59:17.793215 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 16 23:59:17.796575 systemd[1]: Reached target network.target - Network.
Jan 16 23:59:17.799063 systemd[1]: Reached target network-online.target - Network is Online.
Jan 16 23:59:17.802305 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 16 23:59:17.805506 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 16 23:59:17.808639 systemd[1]: Started motdgen.path - Watch for update engine configuration changes.
Jan 16 23:59:17.812160 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data.
Jan 16 23:59:17.815685 systemd[1]: Started logrotate.timer - Daily rotation of log files.
Jan 16 23:59:17.818687 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information..
Jan 16 23:59:17.821937 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories.
Jan 16 23:59:17.825104 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate).
Jan 16 23:59:17.825415 systemd[1]: Reached target paths.target - Path Units.
Jan 16 23:59:17.827732 systemd[1]: Reached target timers.target - Timer Units.
Jan 16 23:59:17.831184 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket.
Jan 16 23:59:17.837774 systemd[1]: Starting docker.socket - Docker Socket for the API...
Jan 16 23:59:17.843266 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket.
Jan 16 23:59:17.850157 systemd[1]: Listening on docker.socket - Docker Socket for the API.
Jan 16 23:59:17.853175 systemd[1]: Reached target sockets.target - Socket Units.
Jan 16 23:59:17.855954 systemd[1]: Reached target basic.target - Basic System.
Jan 16 23:59:17.859374 systemd[1]: System is tainted: cgroupsv1
Jan 16 23:59:17.859495 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met.
Jan 16 23:59:17.859560 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met.
Jan 16 23:59:17.877229 systemd[1]: Starting containerd.service - containerd container runtime...
Jan 16 23:59:17.885174 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent...
Jan 16 23:59:17.903294 systemd[1]: Starting dbus.service - D-Bus System Message Bus...
Jan 16 23:59:17.911067 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit...
Jan 16 23:59:17.918111 systemd[1]: Starting extend-filesystems.service - Extend Filesystems...
Jan 16 23:59:17.922166 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment).
Jan 16 23:59:17.950527 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 16 23:59:17.974341 jq[2088]: false
Jan 16 23:59:17.975181 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd...
Jan 16 23:59:17.998381 systemd[1]: Started ntpd.service - Network Time Service.
Jan 16 23:59:18.015242 systemd[1]: Starting nvidia.service - NVIDIA Configure Service...
Jan 16 23:59:18.032062 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin...
Jan 16 23:59:18.058490 systemd[1]: Starting setup-oem.service - Setup OEM...
Jan 16 23:59:18.072536 dbus-daemon[2087]: [system] SELinux support is enabled
Jan 16 23:59:18.081494 dbus-daemon[2087]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1691 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0")
Jan 16 23:59:18.084173 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline...
Jan 16 23:59:18.102909 extend-filesystems[2089]: Found loop4
Jan 16 23:59:18.112105 extend-filesystems[2089]: Found loop5
Jan 16 23:59:18.112105 extend-filesystems[2089]: Found loop6
Jan 16 23:59:18.112105 extend-filesystems[2089]: Found loop7
Jan 16 23:59:18.103132 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys...
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p1
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p2
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p3
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found usr
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p4
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p6
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p7
Jan 16 23:59:18.121768 extend-filesystems[2089]: Found nvme0n1p9
Jan 16 23:59:18.121768 extend-filesystems[2089]: Checking size of /dev/nvme0n1p9
Jan 16 23:59:18.170292 systemd[1]: Starting systemd-logind.service - User Login Management...
Jan 16 23:59:18.175456 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0).
Jan 16 23:59:18.191070 ntpd[2095]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: ntpd 4.2.8p17@1.4004-o Fri Jan 16 21:54:20 UTC 2026 (1): Starting
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: ----------------------------------------------------
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: ntp-4 is maintained by Network Time Foundation,
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: corporation. Support and training for ntp-4 are
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: available at https://www.nwtime.org/support
Jan 16 23:59:18.202187 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: ----------------------------------------------------
Jan 16 23:59:18.195214 systemd[1]: Starting update-engine.service - Update Engine...
Jan 16 23:59:18.191155 ntpd[2095]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp
Jan 16 23:59:18.191179 ntpd[2095]: ----------------------------------------------------
Jan 16 23:59:18.220388 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition...
Jan 16 23:59:18.191200 ntpd[2095]: ntp-4 is maintained by Network Time Foundation,
Jan 16 23:59:18.191222 ntpd[2095]: Inc. (NTF), a non-profit 501(c)(3) public-benefit
Jan 16 23:59:18.191242 ntpd[2095]: corporation. Support and training for ntp-4 are
Jan 16 23:59:18.191264 ntpd[2095]: available at https://www.nwtime.org/support
Jan 16 23:59:18.191285 ntpd[2095]: ----------------------------------------------------
Jan 16 23:59:18.236126 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: proto: precision = 0.108 usec (-23)
Jan 16 23:59:18.228790 systemd[1]: Started dbus.service - D-Bus System Message Bus.
Jan 16 23:59:18.231098 ntpd[2095]: proto: precision = 0.108 usec (-23)
Jan 16 23:59:18.245266 ntpd[2095]: basedate set to 2026-01-04
Jan 16 23:59:18.271256 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: basedate set to 2026-01-04
Jan 16 23:59:18.271256 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: gps base set to 2026-01-04 (week 2400)
Jan 16 23:59:18.253726 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'.
Jan 16 23:59:18.245348 ntpd[2095]: gps base set to 2026-01-04 (week 2400)
Jan 16 23:59:18.254354 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped.
Jan 16 23:59:18.273083 systemd[1]: motdgen.service: Deactivated successfully.
Jan 16 23:59:18.273708 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd.
Jan 16 23:59:18.290250 systemd[1]: Finished nvidia.service - NVIDIA Configure Service.
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen and drop on 0 v6wildcard [::]:123
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen normally on 2 lo 127.0.0.1:123
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen normally on 3 eth0 172.31.23.167:123
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen normally on 4 lo [::1]:123
Jan 16 23:59:18.300451 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listen normally on 5 eth0 [fe80::4b8:38ff:fe68:7295%2]:123
Jan 16 23:59:18.297131 ntpd[2095]: Listen and drop on 0 v6wildcard [::]:123
Jan 16 23:59:18.297218 ntpd[2095]: Listen and drop on 1 v4wildcard 0.0.0.0:123
Jan 16 23:59:18.308057 extend-filesystems[2089]: Resized partition /dev/nvme0n1p9
Jan 16 23:59:18.317107 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: Listening on routing socket on fd #22 for interface updates
Jan 16 23:59:18.297557 ntpd[2095]: Listen normally on 2 lo 127.0.0.1:123
Jan 16 23:59:18.317394 jq[2120]: true
Jan 16 23:59:18.297632 ntpd[2095]: Listen normally on 3 eth0 172.31.23.167:123
Jan 16 23:59:18.297710 ntpd[2095]: Listen normally on 4 lo [::1]:123
Jan 16 23:59:18.297788 ntpd[2095]: Listen normally on 5 eth0 [fe80::4b8:38ff:fe68:7295%2]:123
Jan 16 23:59:18.305104 ntpd[2095]: Listening on routing socket on fd #22 for interface updates
Jan 16 23:59:18.325667 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully.
Jan 16 23:59:18.344266 extend-filesystems[2136]: resize2fs 1.47.1 (20-May-2024)
Jan 16 23:59:18.395266 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks
Jan 16 23:59:18.329514 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline.
Jan 16 23:59:18.395447 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 16 23:59:18.395447 ntpd[2095]: 16 Jan 23:59:18 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 16 23:59:18.372223 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 16 23:59:18.372284 ntpd[2095]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized
Jan 16 23:59:18.425574 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml).
Jan 16 23:59:18.425672 systemd[1]: Reached target system-config.target - Load system-provided cloud configs.
Jan 16 23:59:18.431342 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url).
Jan 16 23:59:18.440465 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.systemd1'
Jan 16 23:59:18.432953 systemd[1]: Reached target user-config.target - Load user-provided cloud configs.
Jan 16 23:59:18.465498 (ntainerd)[2150]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR
Jan 16 23:59:18.486913 update_engine[2114]: I20260116 23:59:18.472244 2114 main.cc:92] Flatcar Update Engine starting
Jan 16 23:59:18.476719 systemd[1]: Starting systemd-hostnamed.service - Hostname Service...
Jan 16 23:59:18.498407 systemd[1]: Started update-engine.service - Update Engine.
Jan 16 23:59:18.510078 update_engine[2114]: I20260116 23:59:18.501372 2114 update_check_scheduler.cc:74] Next update check in 6m7s Jan 16 23:59:18.502226 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 23:59:18.518056 jq[2138]: true Jan 16 23:59:18.528189 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 23:59:18.603778 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 16 23:59:18.603917 coreos-metadata[2085]: Jan 16 23:59:18.593 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 16 23:59:18.603917 coreos-metadata[2085]: Jan 16 23:59:18.599 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 16 23:59:18.612605 tar[2133]: linux-arm64/LICENSE Jan 16 23:59:18.623572 extend-filesystems[2136]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 16 23:59:18.623572 extend-filesystems[2136]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 16 23:59:18.623572 extend-filesystems[2136]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.614 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.614 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.615 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.615 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.618 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.618 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.620 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.620 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.635 INFO Fetch failed with 404: resource not found Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.635 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.644 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.644 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.649 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.649 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.661 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.661 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.676 INFO Fetch successful Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.676 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 16 23:59:18.689782 coreos-metadata[2085]: Jan 16 23:59:18.684 INFO Fetch successful Jan 16 23:59:18.690802 tar[2133]: linux-arm64/helm Jan 16 23:59:18.629759 systemd[1]: 
extend-filesystems.service: Deactivated successfully. Jan 16 23:59:18.709096 extend-filesystems[2089]: Resized filesystem in /dev/nvme0n1p9 Jan 16 23:59:18.630326 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 23:59:18.701840 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 16 23:59:18.723726 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 16 23:59:18.815856 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 23:59:18.888701 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 23:59:18.892663 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 23:59:18.950647 bash[2199]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:59:18.955708 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 23:59:18.973663 systemd[1]: Starting sshkeys.service... Jan 16 23:59:19.005052 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 23:59:19.026670 amazon-ssm-agent[2175]: Initializing new seelog logger Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: New Seelog Logger Creation Complete Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 processing appconfig overrides Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO Proxy environment variables: Jan 16 23:59:19.062301 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 processing appconfig overrides Jan 16 23:59:19.060576 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 23:59:19.072908 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.072908 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.072908 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 processing appconfig overrides Jan 16 23:59:19.090853 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.090853 amazon-ssm-agent[2175]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 16 23:59:19.091091 amazon-ssm-agent[2175]: 2026/01/16 23:59:19 processing appconfig overrides Jan 16 23:59:19.151328 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO https_proxy: Jan 16 23:59:19.247999 systemd-logind[2113]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 23:59:19.248068 systemd-logind[2113]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 16 23:59:19.260869 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO http_proxy: Jan 16 23:59:19.256326 systemd-logind[2113]: New seat seat0. Jan 16 23:59:19.269365 locksmithd[2153]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 23:59:19.277631 systemd[1]: Started systemd-logind.service - User Login Management. 
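[editor's note: the coreos-metadata entries above follow the EC2 IMDSv2 pattern — a PUT to /latest/api/token to obtain a session token, then GETs against the versioned meta-data paths with that token attached. A minimal sketch of the same flow, using only the standard library and the /2021-01-03/ paths seen in the log:]

```python
import urllib.request

IMDS = "http://169.254.169.254"

def imds_token(ttl=21600):
    # IMDSv2: request a session token with a bounded TTL.
    req = urllib.request.Request(
        f"{IMDS}/latest/api/token",
        method="PUT",
        headers={"X-aws-ec2-metadata-token-ttl-seconds": str(ttl)},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

def imds_get(path, token):
    # Fetch a meta-data leaf, presenting the session token.
    req = urllib.request.Request(
        f"{IMDS}/2021-01-03/meta-data/{path}",
        headers={"X-aws-ec2-metadata-token": token},
    )
    with urllib.request.urlopen(req, timeout=2) as resp:
        return resp.read().decode()

if __name__ == "__main__":
    token = imds_token()
    print(imds_get("instance-id", token))
```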
Jan 16 23:59:19.290001 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (2205) Jan 16 23:59:19.354033 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO no_proxy: Jan 16 23:59:19.442727 containerd[2150]: time="2026-01-16T23:59:19.439382496Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 16 23:59:19.455392 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO Checking if agent identity type OnPrem can be assumed Jan 16 23:59:19.466851 coreos-metadata[2218]: Jan 16 23:59:19.464 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 16 23:59:19.476048 coreos-metadata[2218]: Jan 16 23:59:19.475 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 16 23:59:19.476745 coreos-metadata[2218]: Jan 16 23:59:19.476 INFO Fetch successful Jan 16 23:59:19.476936 coreos-metadata[2218]: Jan 16 23:59:19.476 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 16 23:59:19.490690 coreos-metadata[2218]: Jan 16 23:59:19.488 INFO Fetch successful Jan 16 23:59:19.496559 unknown[2218]: wrote ssh authorized keys file for user: core Jan 16 23:59:19.561551 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO Checking if agent identity type EC2 can be assumed Jan 16 23:59:19.608741 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 16 23:59:19.615055 dbus-daemon[2087]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2152 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 16 23:59:19.619759 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 16 23:59:19.636437 systemd[1]: Starting polkit.service - Authorization Manager... Jan 16 23:59:19.673796 update-ssh-keys[2278]: Updated "/home/core/.ssh/authorized_keys" Jan 16 23:59:19.688552 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 23:59:19.711870 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO Agent will take identity from EC2 Jan 16 23:59:19.719098 systemd[1]: Finished sshkeys.service. Jan 16 23:59:19.742169 containerd[2150]: time="2026-01-16T23:59:19.740026489Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.773961205Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774028861Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774064441Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774369361Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774402601Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." 
type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774518161Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774546277Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774923053Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774956917Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.774988129Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:19.776455 containerd[2150]: time="2026-01-16T23:59:19.775018069Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.777186 containerd[2150]: time="2026-01-16T23:59:19.775198345Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.777186 containerd[2150]: time="2026-01-16T23:59:19.775585525Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 16 23:59:19.785641 containerd[2150]: time="2026-01-16T23:59:19.785562841Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 16 23:59:19.796583 containerd[2150]: time="2026-01-16T23:59:19.788138941Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 16 23:59:19.796583 containerd[2150]: time="2026-01-16T23:59:19.794518969Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 16 23:59:19.797964 containerd[2150]: time="2026-01-16T23:59:19.797310733Z" level=info msg="metadata content store policy set" policy=shared Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.809861137Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.811965925Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.812037889Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.812075605Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.812136085Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 16 23:59:19.812889 containerd[2150]: time="2026-01-16T23:59:19.812480137Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 16 23:59:19.813404 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 16 23:59:19.821849 containerd[2150]: time="2026-01-16T23:59:19.821408198Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 16 23:59:19.821849 containerd[2150]: time="2026-01-16T23:59:19.821703614Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 16 23:59:19.821849 containerd[2150]: time="2026-01-16T23:59:19.821744090Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 16 23:59:19.821849 containerd[2150]: time="2026-01-16T23:59:19.821776994Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.821809166Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.824468282Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.824530358Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.824565998Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.824631278Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.825468338Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.825558122Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.825729 containerd[2150]: time="2026-01-16T23:59:19.825590630Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.827896610Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.827985410Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828043238Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828079022Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828136958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." 
type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828170714Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828228014Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828286958Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828338690Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828419234Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828485750Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828518438Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.828638 containerd[2150]: time="2026-01-16T23:59:19.828576902Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.829501 containerd[2150]: time="2026-01-16T23:59:19.828613670Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 16 23:59:19.831723 containerd[2150]: time="2026-01-16T23:59:19.830049782Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.831723 containerd[2150]: time="2026-01-16T23:59:19.830148854Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.831723 containerd[2150]: time="2026-01-16T23:59:19.830182154Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.830966210Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.832588502Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.832627298Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.832685330Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.832719206Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.832835 containerd[2150]: time="2026-01-16T23:59:19.832779386Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 16 23:59:19.834753 containerd[2150]: time="2026-01-16T23:59:19.832805378Z" level=info msg="NRI interface is disabled by configuration." 
Jan 16 23:59:19.834753 containerd[2150]: time="2026-01-16T23:59:19.833225390Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." type=io.containerd.grpc.v1 Jan 16 23:59:19.836722 containerd[2150]: time="2026-01-16T23:59:19.835962470Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:false] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:false SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 16 23:59:19.838671 containerd[2150]: time="2026-01-16T23:59:19.836962766Z" level=info msg="Connect containerd service" Jan 16 23:59:19.839927 containerd[2150]: time="2026-01-16T23:59:19.837080510Z" level=info msg="using legacy CRI server" Jan 16 23:59:19.839927 containerd[2150]: time="2026-01-16T23:59:19.838891574Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 23:59:19.839927 containerd[2150]: time="2026-01-16T23:59:19.839293406Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 16 23:59:19.844957 polkitd[2291]: Started polkitd version 121 Jan 16 23:59:19.858425 containerd[2150]: time="2026-01-16T23:59:19.854899358Z" level=error msg="failed to load cni during init, please 
check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 23:59:19.858425 containerd[2150]: time="2026-01-16T23:59:19.856664738Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 23:59:19.858425 containerd[2150]: time="2026-01-16T23:59:19.856775594Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 23:59:19.858740 containerd[2150]: time="2026-01-16T23:59:19.858671210Z" level=info msg="Start subscribing containerd event" Jan 16 23:59:19.858892 containerd[2150]: time="2026-01-16T23:59:19.858864326Z" level=info msg="Start recovering state" Jan 16 23:59:19.859215 containerd[2150]: time="2026-01-16T23:59:19.859181750Z" level=info msg="Start event monitor" Jan 16 23:59:19.859341 containerd[2150]: time="2026-01-16T23:59:19.859314278Z" level=info msg="Start snapshots syncer" Jan 16 23:59:19.861410 containerd[2150]: time="2026-01-16T23:59:19.860768666Z" level=info msg="Start cni network conf syncer for default" Jan 16 23:59:19.861410 containerd[2150]: time="2026-01-16T23:59:19.860859110Z" level=info msg="Start streaming server" Jan 16 23:59:19.867405 containerd[2150]: time="2026-01-16T23:59:19.865050710Z" level=info msg="containerd successfully booted in 0.435167s" Jan 16 23:59:19.865207 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 23:59:19.907434 polkitd[2291]: Loading rules from directory /etc/polkit-1/rules.d Jan 16 23:59:19.907564 polkitd[2291]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 16 23:59:19.913852 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 16 23:59:19.917305 polkitd[2291]: Finished loading, compiling and executing 2 rules Jan 16 23:59:19.936739 dbus-daemon[2087]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 16 23:59:19.937042 systemd[1]: Started polkit.service - Authorization Manager. Jan 16 23:59:19.943598 polkitd[2291]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 16 23:59:20.013383 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 16 23:59:20.049937 systemd-hostnamed[2152]: Hostname set to (transient) Jan 16 23:59:20.051900 systemd-resolved[2021]: System hostname changed to 'ip-172-31-23-167'. Jan 16 23:59:20.117704 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 16 23:59:20.214901 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 16 23:59:20.314983 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] Starting Core Agent Jan 16 23:59:20.321305 sshd_keygen[2137]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 23:59:20.417401 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 16 23:59:20.456720 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 23:59:20.478310 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 23:59:20.486350 systemd[1]: Started sshd@0-172.31.23.167:22-68.220.241.50:36696.service - OpenSSH per-connection server daemon (68.220.241.50:36696). Jan 16 23:59:20.516225 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [Registrar] Starting registrar module Jan 16 23:59:20.534782 systemd[1]: issuegen.service: Deactivated successfully. 
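[editor's note: the containerd error above ("no network config found in /etc/cni/net.d") is expected this early in boot — the CRI plugin keeps retrying until a CNI configuration appears. A sketch of writing a minimal bridge conflist that would satisfy the loader; the network name, bridge device, and subnet are made-up values, and the bridge/host-local plugins must exist under /opt/cni/bin for it to actually work:]

```python
import json
import pathlib

# Illustrative bridge network; name, device, and subnet are hypothetical.
conf = {
    "cniVersion": "0.4.0",
    "name": "example-bridge",
    "plugins": [
        {
            "type": "bridge",
            "bridge": "cni0",
            "isGateway": True,
            "ipMasq": True,
            "ipam": {
                "type": "host-local",
                "subnet": "10.88.0.0/16",
                "routes": [{"dst": "0.0.0.0/0"}],
            },
        }
    ],
}

path = pathlib.Path("/etc/cni/net.d/10-example.conflist")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(json.dumps(conf, indent=2))
```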
Jan 16 23:59:20.535364 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 23:59:20.553618 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 23:59:20.579923 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 23:59:20.591081 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 23:59:20.606345 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 16 23:59:20.615870 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 23:59:20.618724 amazon-ssm-agent[2175]: 2026-01-16 23:59:19 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 16 23:59:20.836139 tar[2133]: linux-arm64/README.md Jan 16 23:59:20.865672 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 23:59:21.093010 sshd[2359]: Accepted publickey for core from 68.220.241.50 port 36696 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:21.099411 sshd[2359]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:21.121252 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 23:59:21.132395 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 23:59:21.145871 systemd-logind[2113]: New session 1 of user core. Jan 16 23:59:21.186260 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 23:59:21.207019 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 23:59:21.228971 (systemd)[2381]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 16 23:59:21.499198 systemd[2381]: Queued start job for default target default.target. Jan 16 23:59:21.500614 systemd[2381]: Created slice app.slice - User Application Slice. Jan 16 23:59:21.500664 systemd[2381]: Reached target paths.target - Paths. Jan 16 23:59:21.500697 systemd[2381]: Reached target timers.target - Timers. Jan 16 23:59:21.512016 systemd[2381]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 23:59:21.546363 systemd[2381]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 23:59:21.546476 systemd[2381]: Reached target sockets.target - Sockets. Jan 16 23:59:21.546509 systemd[2381]: Reached target basic.target - Basic System. Jan 16 23:59:21.547965 systemd[2381]: Reached target default.target - Main User Target. Jan 16 23:59:21.548074 systemd[2381]: Startup finished in 300ms. Jan 16 23:59:21.548651 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 23:59:21.559593 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 23:59:21.893464 amazon-ssm-agent[2175]: 2026-01-16 23:59:21 INFO [EC2Identity] EC2 registration was successful. Jan 16 23:59:21.929106 amazon-ssm-agent[2175]: 2026-01-16 23:59:21 INFO [CredentialRefresher] credentialRefresher has started Jan 16 23:59:21.929106 amazon-ssm-agent[2175]: 2026-01-16 23:59:21 INFO [CredentialRefresher] Starting credentials refresher loop Jan 16 23:59:21.929300 amazon-ssm-agent[2175]: 2026-01-16 23:59:21 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 16 23:59:21.969531 systemd[1]: Started sshd@1-172.31.23.167:22-68.220.241.50:36704.service - OpenSSH per-connection server daemon (68.220.241.50:36704). 
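[editor's note: the "Accepted publickey ... RSA SHA256:sDAGz..." lines above use OpenSSH's SHA256 fingerprint format: the SHA-256 digest of the raw key blob, base64-encoded with padding stripped. A small sketch that reproduces that fingerprint from an authorized_keys line; the commented example key is hypothetical and truncated:]

```python
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    # authorized_keys format: "<type> <base64-blob> [comment]"
    blob_b64 = authorized_keys_line.split()[1]
    digest = hashlib.sha256(base64.b64decode(blob_b64)).digest()
    # OpenSSH prints the digest base64-encoded without '=' padding.
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

# Hypothetical usage with a (truncated) key line:
# print(ssh_fingerprint("ssh-ed25519 AAAAC3NzaC1lZDI1NTE5AAAAI... core"))
```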
Jan 16 23:59:21.994627 amazon-ssm-agent[2175]: 2026-01-16 23:59:21 INFO [CredentialRefresher] Next credential rotation will be in 30.7499920529 minutes Jan 16 23:59:22.211277 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:22.215488 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 23:59:22.220889 systemd[1]: Startup finished in 9.439s (kernel) + 11.236s (userspace) = 20.676s. Jan 16 23:59:22.221914 (kubelet)[2403]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:59:22.517029 sshd[2393]: Accepted publickey for core from 68.220.241.50 port 36704 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:22.517880 sshd[2393]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:22.526441 systemd-logind[2113]: New session 2 of user core. Jan 16 23:59:22.532373 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 16 23:59:22.902505 sshd[2393]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:22.909936 systemd[1]: sshd@1-172.31.23.167:22-68.220.241.50:36704.service: Deactivated successfully. Jan 16 23:59:22.918098 systemd[1]: session-2.scope: Deactivated successfully. Jan 16 23:59:22.921446 systemd-logind[2113]: Session 2 logged out. Waiting for processes to exit. Jan 16 23:59:22.924938 systemd-logind[2113]: Removed session 2. Jan 16 23:59:22.959728 amazon-ssm-agent[2175]: 2026-01-16 23:59:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 16 23:59:22.999328 systemd[1]: Started sshd@2-172.31.23.167:22-68.220.241.50:47808.service - OpenSSH per-connection server daemon (68.220.241.50:47808). Jan 16 23:59:23.061492 amazon-ssm-agent[2175]: 2026-01-16 23:59:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2419) started Jan 16 23:59:23.162198 amazon-ssm-agent[2175]: 2026-01-16 23:59:22 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 16 23:59:23.556692 sshd[2420]: Accepted publickey for core from 68.220.241.50 port 47808 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:23.559246 sshd[2420]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:23.567731 systemd-logind[2113]: New session 3 of user core. Jan 16 23:59:23.571419 kubelet[2403]: E0116 23:59:23.570349 2403 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:59:23.576379 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 23:59:23.576909 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:59:23.577263 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:59:23.933170 sshd[2420]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:23.940775 systemd[1]: sshd@2-172.31.23.167:22-68.220.241.50:47808.service: Deactivated successfully. Jan 16 23:59:23.946381 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 23:59:23.947115 systemd-logind[2113]: Session 3 logged out. 
Waiting for processes to exit. Jan 16 23:59:23.949724 systemd-logind[2113]: Removed session 3. Jan 16 23:59:24.024335 systemd[1]: Started sshd@3-172.31.23.167:22-68.220.241.50:47812.service - OpenSSH per-connection server daemon (68.220.241.50:47812). Jan 16 23:59:24.572559 sshd[2440]: Accepted publickey for core from 68.220.241.50 port 47812 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:24.575518 sshd[2440]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:24.584001 systemd-logind[2113]: New session 4 of user core. Jan 16 23:59:24.593570 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 23:59:24.959162 sshd[2440]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:24.965646 systemd[1]: sshd@3-172.31.23.167:22-68.220.241.50:47812.service: Deactivated successfully. Jan 16 23:59:24.971323 systemd-logind[2113]: Session 4 logged out. Waiting for processes to exit. Jan 16 23:59:24.972602 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 23:59:24.974994 systemd-logind[2113]: Removed session 4. Jan 16 23:59:25.049307 systemd[1]: Started sshd@4-172.31.23.167:22-68.220.241.50:47824.service - OpenSSH per-connection server daemon (68.220.241.50:47824). Jan 16 23:59:24.969772 systemd-resolved[2021]: Clock change detected. Flushing caches. Jan 16 23:59:24.988056 systemd-journald[1604]: Time jumped backwards, rotating. Jan 16 23:59:25.363408 sshd[2448]: Accepted publickey for core from 68.220.241.50 port 47824 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:25.366111 sshd[2448]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:25.374855 systemd-logind[2113]: New session 5 of user core. Jan 16 23:59:25.386956 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 23:59:25.680341 sudo[2453]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 23:59:25.681064 sudo[2453]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:59:25.697616 sudo[2453]: pam_unix(sudo:session): session closed for user root Jan 16 23:59:25.783923 sshd[2448]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:25.792238 systemd[1]: sshd@4-172.31.23.167:22-68.220.241.50:47824.service: Deactivated successfully. Jan 16 23:59:25.798090 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 23:59:25.799983 systemd-logind[2113]: Session 5 logged out. Waiting for processes to exit. Jan 16 23:59:25.802254 systemd-logind[2113]: Removed session 5. Jan 16 23:59:25.874956 systemd[1]: Started sshd@5-172.31.23.167:22-68.220.241.50:47826.service - OpenSSH per-connection server daemon (68.220.241.50:47826). Jan 16 23:59:26.422912 sshd[2458]: Accepted publickey for core from 68.220.241.50 port 47826 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:26.425626 sshd[2458]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:26.434109 systemd-logind[2113]: New session 6 of user core. Jan 16 23:59:26.440936 systemd[1]: Started session-6.scope - Session 6 of User core. 
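[editor's note: the sudo entries in this stretch of the log share a fixed field layout ("user : PWD=... ; USER=... ; COMMAND=..."), which makes them easy to mine when auditing a boot transcript like this one. A sketch of extracting those fields with a regular expression, tested against one of the lines above:]

```python
import re

# Matches sudo's "user : PWD=... ; USER=... ; COMMAND=..." journal form.
SUDO_RE = re.compile(
    r"sudo\[(?P<pid>\d+)\]: (?P<user>\S+) : "
    r"PWD=(?P<pwd>[^;]+?) ; USER=(?P<runas>\S+) ; COMMAND=(?P<cmd>.+)$"
)

sample = ("Jan 16 23:59:26.739887 sudo[2462]: core : PWD=/home/core ; "
          "USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules")

m = SUDO_RE.search(sample)
if m:
    print(m.group("user"), "->", m.group("runas"), ":", m.group("cmd"))
```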
Jan 16 23:59:26.723364 sudo[2463]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 23:59:26.724024 sudo[2463]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:59:26.730155 sudo[2463]: pam_unix(sudo:session): session closed for user root Jan 16 23:59:26.739887 sudo[2462]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 16 23:59:26.740592 sudo[2462]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:59:26.766028 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 16 23:59:26.771159 auditctl[2466]: No rules Jan 16 23:59:26.772326 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 23:59:26.772974 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 16 23:59:26.784213 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 16 23:59:26.833788 augenrules[2485]: No rules Jan 16 23:59:26.837333 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 16 23:59:26.841527 sudo[2462]: pam_unix(sudo:session): session closed for user root Jan 16 23:59:26.926806 sshd[2458]: pam_unix(sshd:session): session closed for user core Jan 16 23:59:26.932082 systemd[1]: sshd@5-172.31.23.167:22-68.220.241.50:47826.service: Deactivated successfully. Jan 16 23:59:26.938029 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 23:59:26.938499 systemd-logind[2113]: Session 6 logged out. Waiting for processes to exit. Jan 16 23:59:26.941357 systemd-logind[2113]: Removed session 6. Jan 16 23:59:27.018919 systemd[1]: Started sshd@6-172.31.23.167:22-68.220.241.50:47832.service - OpenSSH per-connection server daemon (68.220.241.50:47832). Jan 16 23:59:27.559589 sshd[2494]: Accepted publickey for core from 68.220.241.50 port 47832 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 16 23:59:27.562381 sshd[2494]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 23:59:27.571271 systemd-logind[2113]: New session 7 of user core. Jan 16 23:59:27.577964 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 23:59:27.856624 sudo[2498]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 23:59:27.857717 sudo[2498]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 23:59:28.358893 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 16 23:59:28.372133 (dockerd)[2514]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 23:59:28.797648 dockerd[2514]: time="2026-01-16T23:59:28.796106403Z" level=info msg="Starting up" Jan 16 23:59:29.199937 systemd[1]: var-lib-docker-metacopy\x2dcheck2602067154-merged.mount: Deactivated successfully. Jan 16 23:59:29.216421 dockerd[2514]: time="2026-01-16T23:59:29.216069517Z" level=info msg="Loading containers: start." Jan 16 23:59:29.381482 kernel: Initializing XFRM netlink socket Jan 16 23:59:29.414267 (udev-worker)[2537]: Network interface NamePolicy= disabled on kernel command line. Jan 16 23:59:29.513711 systemd-networkd[1691]: docker0: Link UP Jan 16 23:59:29.534906 dockerd[2514]: time="2026-01-16T23:59:29.534828290Z" level=info msg="Loading containers: done." 
Jan 16 23:59:29.561947 dockerd[2514]: time="2026-01-16T23:59:29.561814154Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 23:59:29.562473 dockerd[2514]: time="2026-01-16T23:59:29.562216778Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 16 23:59:29.562594 dockerd[2514]: time="2026-01-16T23:59:29.562566938Z" level=info msg="Daemon has completed initialization" Jan 16 23:59:29.626484 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 23:59:29.628429 dockerd[2514]: time="2026-01-16T23:59:29.626294331Z" level=info msg="API listen on /run/docker.sock" Jan 16 23:59:31.427016 containerd[2150]: time="2026-01-16T23:59:31.426490096Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 23:59:32.075218 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3620649318.mount: Deactivated successfully. Jan 16 23:59:33.424576 containerd[2150]: time="2026-01-16T23:59:33.424495385Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:33.427355 containerd[2150]: time="2026-01-16T23:59:33.427298562Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=26441982" Jan 16 23:59:33.428680 containerd[2150]: time="2026-01-16T23:59:33.428624502Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:33.434849 containerd[2150]: time="2026-01-16T23:59:33.434764830Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:33.437331 containerd[2150]: time="2026-01-16T23:59:33.437284602Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 2.010736918s" Jan 16 23:59:33.439384 containerd[2150]: time="2026-01-16T23:59:33.437512014Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 16 23:59:33.439826 containerd[2150]: time="2026-01-16T23:59:33.439786890Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 23:59:33.591367 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 23:59:33.603949 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:33.960856 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
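[editor's note: dockerd's "API listen on /run/docker.sock" line above refers to the Engine API served over a Unix socket. A sketch of querying it with only the standard library, by pointing an HTTPConnection at the socket; GET /version is a stock Engine API endpoint, and running this requires permission to read the socket (root or the docker group):]

```python
import http.client
import json
import socket

class UnixHTTPConnection(http.client.HTTPConnection):
    """HTTP over a Unix domain socket instead of TCP."""

    def __init__(self, socket_path, timeout=5):
        super().__init__("localhost", timeout=timeout)
        self.socket_path = socket_path

    def connect(self):
        sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        sock.settimeout(self.timeout)
        sock.connect(self.socket_path)
        self.sock = sock

if __name__ == "__main__":
    conn = UnixHTTPConnection("/run/docker.sock")
    conn.request("GET", "/version")
    print(json.loads(conn.getresponse().read())["Version"])
```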
Jan 16 23:59:33.980044 (kubelet)[2722]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:59:34.061068 kubelet[2722]: E0116 23:59:34.059995 2722 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:59:34.069732 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:59:34.070154 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:59:34.832679 containerd[2150]: time="2026-01-16T23:59:34.832602068Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:34.835364 containerd[2150]: time="2026-01-16T23:59:34.834887457Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22622086" Jan 16 23:59:34.836618 containerd[2150]: time="2026-01-16T23:59:34.836561781Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:34.842480 containerd[2150]: time="2026-01-16T23:59:34.842316405Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:34.844846 containerd[2150]: time="2026-01-16T23:59:34.844797273Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.404739519s" Jan 16 23:59:34.845168 containerd[2150]: time="2026-01-16T23:59:34.845000469Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 16 23:59:34.845989 containerd[2150]: time="2026-01-16T23:59:34.845711109Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 23:59:36.079845 containerd[2150]: time="2026-01-16T23:59:36.079791775Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:36.081808 containerd[2150]: time="2026-01-16T23:59:36.081570919Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17616747" Jan 16 23:59:36.082859 containerd[2150]: time="2026-01-16T23:59:36.082816075Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:36.090222 containerd[2150]: time="2026-01-16T23:59:36.090142195Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" 
value:\"managed\"}" Jan 16 23:59:36.092782 containerd[2150]: time="2026-01-16T23:59:36.092719495Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.24693947s" Jan 16 23:59:36.092908 containerd[2150]: time="2026-01-16T23:59:36.092780455Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 16 23:59:36.093603 containerd[2150]: time="2026-01-16T23:59:36.093542575Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 23:59:37.301050 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1374983519.mount: Deactivated successfully. Jan 16 23:59:37.900209 containerd[2150]: time="2026-01-16T23:59:37.900144996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:37.901921 containerd[2150]: time="2026-01-16T23:59:37.901632192Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=27558724" Jan 16 23:59:37.903708 containerd[2150]: time="2026-01-16T23:59:37.903226908Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:37.908638 containerd[2150]: time="2026-01-16T23:59:37.908562036Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.814952141s" Jan 16 23:59:37.908638 containerd[2150]: time="2026-01-16T23:59:37.908626680Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 16 23:59:37.909969 containerd[2150]: time="2026-01-16T23:59:37.908766840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:37.909969 containerd[2150]: time="2026-01-16T23:59:37.909628884Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 23:59:38.446837 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2872457645.mount: Deactivated successfully. 
Jan 16 23:59:39.655485 containerd[2150]: time="2026-01-16T23:59:39.655052016Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:39.657353 containerd[2150]: time="2026-01-16T23:59:39.657297228Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=16951622" Jan 16 23:59:39.658961 containerd[2150]: time="2026-01-16T23:59:39.658032192Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:39.664168 containerd[2150]: time="2026-01-16T23:59:39.664109172Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:39.666715 containerd[2150]: time="2026-01-16T23:59:39.666665101Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.756984521s" Jan 16 23:59:39.666878 containerd[2150]: time="2026-01-16T23:59:39.666846865Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 16 23:59:39.667848 containerd[2150]: time="2026-01-16T23:59:39.667795249Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 23:59:40.132228 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4078751564.mount: Deactivated successfully. 
Jan 16 23:59:40.140865 containerd[2150]: time="2026-01-16T23:59:40.140776331Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:40.142421 containerd[2150]: time="2026-01-16T23:59:40.142366943Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 16 23:59:40.143490 containerd[2150]: time="2026-01-16T23:59:40.143333831Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:40.148487 containerd[2150]: time="2026-01-16T23:59:40.147393635Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:40.149701 containerd[2150]: time="2026-01-16T23:59:40.149194391Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 481.342562ms" Jan 16 23:59:40.149701 containerd[2150]: time="2026-01-16T23:59:40.149252843Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 23:59:40.150176 containerd[2150]: time="2026-01-16T23:59:40.150137819Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 23:59:40.690382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3240273355.mount: Deactivated successfully. Jan 16 23:59:43.540514 containerd[2150]: time="2026-01-16T23:59:43.540001636Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.542484 containerd[2150]: time="2026-01-16T23:59:43.542395576Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=67943165" Jan 16 23:59:43.546470 containerd[2150]: time="2026-01-16T23:59:43.546138220Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.554121 containerd[2150]: time="2026-01-16T23:59:43.554065996Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 23:59:43.556736 containerd[2150]: time="2026-01-16T23:59:43.556674712Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 3.405675089s" Jan 16 23:59:43.556887 containerd[2150]: time="2026-01-16T23:59:43.556735144Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 16 23:59:44.074587 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. 
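[editor's note: the "Pulled image ... size \"N\" in Xs" entries above report bytes read and wall time per pull, so effective throughput can be derived from the transcript. A sketch that parses containerd's escaped-quote log form and computes MiB/s; it only matches durations printed in seconds (the pause image's "481.342562ms" form would need a second pattern):]

```python
import re

# Matches containerd's escaped-quote form: size \"67941650\" in 3.405675089s
PULL_RE = re.compile(
    r'Pulled image \\"(?P<image>[^\\]+)\\"'
    r'.*?size \\"(?P<size>\d+)\\" in (?P<secs>[\d.]+)s'
)

line = ('Pulled image \\"registry.k8s.io/etcd:3.5.16-0\\" with image id '
        '\\"sha256:7fc9...\\", size \\"67941650\\" in 3.405675089s')

m = PULL_RE.search(line)
if m:
    mib_s = int(m.group("size")) / float(m.group("secs")) / (1024 * 1024)
    print(f'{m.group("image")}: {mib_s:.1f} MiB/s')
```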
Jan 16 23:59:44.087897 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:44.480724 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:44.494163 (kubelet)[2892]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 23:59:44.572492 kubelet[2892]: E0116 23:59:44.572394 2892 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 23:59:44.581604 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 23:59:44.581997 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 23:59:49.849333 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 16 23:59:50.612118 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:50.619974 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:50.684614 systemd[1]: Reloading requested from client PID 2912 ('systemctl') (unit session-7.scope)... Jan 16 23:59:50.684813 systemd[1]: Reloading... Jan 16 23:59:50.897486 zram_generator::config[2952]: No configuration found. Jan 16 23:59:51.155762 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 16 23:59:51.324531 systemd[1]: Reloading finished in 638 ms. Jan 16 23:59:51.401300 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 23:59:51.401773 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 23:59:51.402787 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:51.418981 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 23:59:51.743801 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 23:59:51.761151 (kubelet)[3027]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 23:59:51.832465 kubelet[3027]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 23:59:51.834478 kubelet[3027]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 23:59:51.834478 kubelet[3027]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
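[editor's note: the repeated kubelet exits above all trace to a missing /var/lib/kubelet/config.yaml — on a real node that file is written by kubeadm during init/join, and the deprecation warnings in this restart say flags like --container-runtime-endpoint belong in it. Purely as a sketch of the file's shape (kubelet.config.k8s.io/v1beta1 schema, cgroupfs driver matching the node config dumped below; a real config carries many more fields such as authentication and clusterDNS):]

```python
import pathlib
import textwrap

# Illustrative only: kubeadm normally generates this file during join/init.
config = textwrap.dedent("""\
    apiVersion: kubelet.config.k8s.io/v1beta1
    kind: KubeletConfiguration
    containerRuntimeEndpoint: unix:///run/containerd/containerd.sock
    cgroupDriver: cgroupfs
""")

path = pathlib.Path("/var/lib/kubelet/config.yaml")
path.parent.mkdir(parents=True, exist_ok=True)
path.write_text(config)
```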
Jan 16 23:59:51.834478 kubelet[3027]: I0116 23:59:51.833293 3027 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 23:59:53.411691 kubelet[3027]: I0116 23:59:53.411631 3027 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 23:59:53.412375 kubelet[3027]: I0116 23:59:53.412344 3027 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 23:59:53.413376 kubelet[3027]: I0116 23:59:53.413331 3027 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 23:59:53.467700 kubelet[3027]: E0116 23:59:53.467630 3027 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://172.31.23.167:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:53.470057 kubelet[3027]: I0116 23:59:53.469777 3027 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 23:59:53.480564 kubelet[3027]: E0116 23:59:53.478814 3027 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 16 23:59:53.480564 kubelet[3027]: I0116 23:59:53.478864 3027 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. Falling back to using cgroupDriver from kubelet config." Jan 16 23:59:53.484932 kubelet[3027]: I0116 23:59:53.484884 3027 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 23:59:53.486886 kubelet[3027]: I0116 23:59:53.486811 3027 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 23:59:53.487171 kubelet[3027]: I0116 23:59:53.486876 3027 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 16 23:59:53.487360 kubelet[3027]: I0116 23:59:53.487314 3027 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 23:59:53.487360 kubelet[3027]: I0116 23:59:53.487337 3027 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 23:59:53.487751 kubelet[3027]: I0116 23:59:53.487711 3027 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:53.493767 kubelet[3027]: I0116 23:59:53.493608 3027 kubelet.go:446] "Attempting to sync node with API server" Jan 16 23:59:53.493767 kubelet[3027]: I0116 23:59:53.493652 3027 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 23:59:53.493767 kubelet[3027]: I0116 23:59:53.493693 3027 kubelet.go:352] "Adding apiserver pod source" Jan 16 23:59:53.493767 kubelet[3027]: I0116 23:59:53.493713 3027 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 23:59:53.500406 kubelet[3027]: W0116 23:59:53.500074 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-167&limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:53.500406 kubelet[3027]: E0116 23:59:53.500174 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-167&limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:53.502872 kubelet[3027]: W0116 
23:59:53.502158 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:53.502872 kubelet[3027]: E0116 23:59:53.502242 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:53.502872 kubelet[3027]: I0116 23:59:53.502379 3027 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 16 23:59:53.504376 kubelet[3027]: I0116 23:59:53.504343 3027 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 23:59:53.504747 kubelet[3027]: W0116 23:59:53.504726 3027 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 16 23:59:53.509765 kubelet[3027]: I0116 23:59:53.509703 3027 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 23:59:53.510109 kubelet[3027]: I0116 23:59:53.509987 3027 server.go:1287] "Started kubelet" Jan 16 23:59:53.514336 kubelet[3027]: I0116 23:59:53.514178 3027 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 23:59:53.518476 kubelet[3027]: I0116 23:59:53.516983 3027 server.go:479] "Adding debug handlers to kubelet server" Jan 16 23:59:53.523017 kubelet[3027]: I0116 23:59:53.522921 3027 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 23:59:53.523639 kubelet[3027]: I0116 23:59:53.523608 3027 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 23:59:53.527470 kubelet[3027]: I0116 23:59:53.527404 3027 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 23:59:53.530756 kubelet[3027]: E0116 23:59:53.530234 3027 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.23.167:6443/api/v1/namespaces/default/events\": dial tcp 172.31.23.167:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-23-167.188b5b9980139601 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-23-167,UID:ip-172-31-23-167,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-23-167,},FirstTimestamp:2026-01-16 23:59:53.509942785 +0000 UTC m=+1.742626689,LastTimestamp:2026-01-16 23:59:53.509942785 +0000 UTC m=+1.742626689,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-23-167,}" Jan 16 23:59:53.531888 kubelet[3027]: I0116 23:59:53.531829 3027 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 23:59:53.535686 kubelet[3027]: E0116 23:59:53.535622 3027 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-167\" not found" Jan 16 23:59:53.535800 kubelet[3027]: I0116 
23:59:53.535714 3027 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 23:59:53.536110 kubelet[3027]: I0116 23:59:53.536064 3027 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 23:59:53.536209 kubelet[3027]: I0116 23:59:53.536176 3027 reconciler.go:26] "Reconciler: start to sync state" Jan 16 23:59:53.538306 kubelet[3027]: W0116 23:59:53.538196 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:53.538420 kubelet[3027]: E0116 23:59:53.538320 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:53.538849 kubelet[3027]: I0116 23:59:53.538801 3027 factory.go:221] Registration of the systemd container factory successfully Jan 16 23:59:53.538965 kubelet[3027]: I0116 23:59:53.538926 3027 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 23:59:53.541507 kubelet[3027]: E0116 23:59:53.541213 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": dial tcp 172.31.23.167:6443: connect: connection refused" interval="200ms" Jan 16 23:59:53.543478 kubelet[3027]: I0116 23:59:53.542311 3027 factory.go:221] Registration of the containerd container factory successfully Jan 16 23:59:53.565814 kubelet[3027]: E0116 23:59:53.565765 3027 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 23:59:53.573810 kubelet[3027]: I0116 23:59:53.573731 3027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 23:59:53.578191 kubelet[3027]: I0116 23:59:53.578125 3027 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 23:59:53.578191 kubelet[3027]: I0116 23:59:53.578175 3027 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 23:59:53.578375 kubelet[3027]: I0116 23:59:53.578206 3027 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
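The lease the controller above keeps failing to ensure is a per-node Lease object in the kube-node-lease namespace, which the kubelet creates and renews as its heartbeat once the API server is reachable; note how the retry interval doubles across this log (200ms, then 400ms, 800ms, 1.6s). Stripped down, the object looks roughly like this, with the name and holder taken from the log and the timing values illustrative:

apiVersion: coordination.k8s.io/v1
kind: Lease
metadata:
  name: ip-172-31-23-167             # one Lease per node, named after the node
  namespace: kube-node-lease
spec:
  holderIdentity: ip-172-31-23-167
  leaseDurationSeconds: 40           # kubelet default; illustrative
  renewTime: "2026-01-16T23:59:53.000000Z"   # renewed roughly every 10s; illustrative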
Jan 16 23:59:53.578375 kubelet[3027]: I0116 23:59:53.578225 3027 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 23:59:53.578375 kubelet[3027]: E0116 23:59:53.578289 3027 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 23:59:53.587555 kubelet[3027]: W0116 23:59:53.586548 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:53.587555 kubelet[3027]: E0116 23:59:53.587037 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:53.594571 kubelet[3027]: I0116 23:59:53.594518 3027 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 23:59:53.594571 kubelet[3027]: I0116 23:59:53.594552 3027 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 23:59:53.594750 kubelet[3027]: I0116 23:59:53.594586 3027 state_mem.go:36] "Initialized new in-memory state store" Jan 16 23:59:53.599498 kubelet[3027]: I0116 23:59:53.599426 3027 policy_none.go:49] "None policy: Start" Jan 16 23:59:53.599498 kubelet[3027]: I0116 23:59:53.599499 3027 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 23:59:53.599684 kubelet[3027]: I0116 23:59:53.599525 3027 state_mem.go:35] "Initializing new in-memory state store" Jan 16 23:59:53.615485 kubelet[3027]: I0116 23:59:53.613236 3027 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 23:59:53.617808 kubelet[3027]: I0116 23:59:53.616634 3027 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 23:59:53.617808 kubelet[3027]: I0116 23:59:53.616669 3027 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 23:59:53.617808 kubelet[3027]: I0116 23:59:53.617116 3027 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 23:59:53.619519 kubelet[3027]: E0116 23:59:53.619489 3027 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 23:59:53.619857 kubelet[3027]: E0116 23:59:53.619834 3027 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-23-167\" not found" Jan 16 23:59:53.690703 kubelet[3027]: E0116 23:59:53.690570 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:53.691924 kubelet[3027]: E0116 23:59:53.691892 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:53.696937 kubelet[3027]: E0116 23:59:53.696900 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:53.718944 kubelet[3027]: I0116 23:59:53.718909 3027 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 16 23:59:53.719810 kubelet[3027]: E0116 23:59:53.719770 3027 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.167:6443/api/v1/nodes\": dial tcp 172.31.23.167:6443: connect: connection refused" node="ip-172-31-23-167" Jan 16 23:59:53.738257 kubelet[3027]: I0116 23:59:53.738211 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 16 23:59:53.738435 kubelet[3027]: I0116 23:59:53.738409 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:53.738607 kubelet[3027]: I0116 23:59:53.738584 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:53.738765 kubelet[3027]: I0116 23:59:53.738743 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:53.738940 kubelet[3027]: I0116 23:59:53.738884 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:53.739108 kubelet[3027]: I0116 23:59:53.739084 3027 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa2bc7d0f83b1ccad4a1781934a9cba-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-167\" (UID: \"7fa2bc7d0f83b1ccad4a1781934a9cba\") " pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 16 23:59:53.739273 kubelet[3027]: I0116 23:59:53.739250 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 16 23:59:53.739394 kubelet[3027]: I0116 23:59:53.739367 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 16 23:59:53.739564 kubelet[3027]: I0116 23:59:53.739516 3027 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:53.741883 kubelet[3027]: E0116 23:59:53.741832 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": dial tcp 172.31.23.167:6443: connect: connection refused" interval="400ms" Jan 16 23:59:53.922478 kubelet[3027]: I0116 23:59:53.922147 3027 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 16 23:59:53.922766 kubelet[3027]: E0116 23:59:53.922724 3027 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.167:6443/api/v1/nodes\": dial tcp 172.31.23.167:6443: connect: connection refused" node="ip-172-31-23-167" Jan 16 23:59:53.993500 containerd[2150]: time="2026-01-16T23:59:53.993343888Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-167,Uid:7fa2bc7d0f83b1ccad4a1781934a9cba,Namespace:kube-system,Attempt:0,}" Jan 16 23:59:53.994060 containerd[2150]: time="2026-01-16T23:59:53.993348460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-167,Uid:829552e57574c4a8c7a18c7fabd07afc,Namespace:kube-system,Attempt:0,}" Jan 16 23:59:53.999500 containerd[2150]: time="2026-01-16T23:59:53.999080728Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-167,Uid:1fd2787fe7cacbc14ed8843298513aee,Namespace:kube-system,Attempt:0,}" Jan 16 23:59:54.142538 kubelet[3027]: E0116 23:59:54.142429 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": dial tcp 172.31.23.167:6443: connect: connection refused" interval="800ms" Jan 16 23:59:54.325312 kubelet[3027]: I0116 23:59:54.325161 3027 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 16 23:59:54.325942 
kubelet[3027]: E0116 23:59:54.325728 3027 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.167:6443/api/v1/nodes\": dial tcp 172.31.23.167:6443: connect: connection refused" node="ip-172-31-23-167" Jan 16 23:59:54.503308 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3486146630.mount: Deactivated successfully. Jan 16 23:59:54.510686 containerd[2150]: time="2026-01-16T23:59:54.510612026Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:54.516242 containerd[2150]: time="2026-01-16T23:59:54.516160994Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:54.519287 containerd[2150]: time="2026-01-16T23:59:54.519227030Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 16 23:59:54.523186 containerd[2150]: time="2026-01-16T23:59:54.523106606Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:59:54.525200 containerd[2150]: time="2026-01-16T23:59:54.525150386Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 16 23:59:54.525494 containerd[2150]: time="2026-01-16T23:59:54.525421514Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:54.534898 containerd[2150]: time="2026-01-16T23:59:54.534806606Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:54.539258 containerd[2150]: time="2026-01-16T23:59:54.538890914Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 539.702066ms" Jan 16 23:59:54.543048 containerd[2150]: time="2026-01-16T23:59:54.542983442Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 549.496022ms" Jan 16 23:59:54.544149 containerd[2150]: time="2026-01-16T23:59:54.543401306Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 23:59:54.548163 containerd[2150]: time="2026-01-16T23:59:54.548103818Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest 
\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 554.20889ms" Jan 16 23:59:54.550864 kubelet[3027]: W0116 23:59:54.550781 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://172.31.23.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:54.551540 kubelet[3027]: E0116 23:59:54.550885 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://172.31.23.167:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:54.736531 containerd[2150]: time="2026-01-16T23:59:54.735756879Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:54.736531 containerd[2150]: time="2026-01-16T23:59:54.736225575Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:54.736967 containerd[2150]: time="2026-01-16T23:59:54.736869795Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.737268 containerd[2150]: time="2026-01-16T23:59:54.737197323Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.742553 containerd[2150]: time="2026-01-16T23:59:54.741655275Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:54.742553 containerd[2150]: time="2026-01-16T23:59:54.741761895Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:54.742553 containerd[2150]: time="2026-01-16T23:59:54.741798963Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.742553 containerd[2150]: time="2026-01-16T23:59:54.741973587Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.743648 containerd[2150]: time="2026-01-16T23:59:54.742791507Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 16 23:59:54.743648 containerd[2150]: time="2026-01-16T23:59:54.742878939Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 16 23:59:54.745020 containerd[2150]: time="2026-01-16T23:59:54.743887083Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.745020 containerd[2150]: time="2026-01-16T23:59:54.744127875Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 16 23:59:54.763097 kubelet[3027]: W0116 23:59:54.762850 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://172.31.23.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-167&limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:54.763097 kubelet[3027]: E0116 23:59:54.763070 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://172.31.23.167:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-23-167&limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:54.803506 kubelet[3027]: W0116 23:59:54.802073 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://172.31.23.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:54.803506 kubelet[3027]: E0116 23:59:54.803223 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://172.31.23.167:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:54.901529 containerd[2150]: time="2026-01-16T23:59:54.901348840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-23-167,Uid:829552e57574c4a8c7a18c7fabd07afc,Namespace:kube-system,Attempt:0,} returns sandbox id \"96387a82976a045234c49070c2031d6f3657208a1ce79dd6e645aabfcda0562b\"" Jan 16 23:59:54.909788 containerd[2150]: time="2026-01-16T23:59:54.909588040Z" level=info msg="CreateContainer within sandbox \"96387a82976a045234c49070c2031d6f3657208a1ce79dd6e645aabfcda0562b\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 23:59:54.918994 containerd[2150]: time="2026-01-16T23:59:54.918917056Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-23-167,Uid:1fd2787fe7cacbc14ed8843298513aee,Namespace:kube-system,Attempt:0,} returns sandbox id \"faea05b96b5961e46554f84b5d818e2140c1e1e8edf82d3490c6a419c6e478ea\"" Jan 16 23:59:54.921195 containerd[2150]: time="2026-01-16T23:59:54.920904268Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-23-167,Uid:7fa2bc7d0f83b1ccad4a1781934a9cba,Namespace:kube-system,Attempt:0,} returns sandbox id \"a5766b3578598370c2dc652f8b30c4b376a96ed9551e33c28c84b96a2517c45e\"" Jan 16 23:59:54.929815 containerd[2150]: time="2026-01-16T23:59:54.929643316Z" level=info msg="CreateContainer within sandbox \"a5766b3578598370c2dc652f8b30c4b376a96ed9551e33c28c84b96a2517c45e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 23:59:54.929815 containerd[2150]: time="2026-01-16T23:59:54.929702944Z" level=info msg="CreateContainer within sandbox \"faea05b96b5961e46554f84b5d818e2140c1e1e8edf82d3490c6a419c6e478ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 23:59:54.944386 containerd[2150]: time="2026-01-16T23:59:54.944165668Z" level=info msg="CreateContainer within sandbox \"96387a82976a045234c49070c2031d6f3657208a1ce79dd6e645aabfcda0562b\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id 
\"1edc88abd6ab34d0897b1121cba575d0cb115a71fd638c0ca745037675536d02\"" Jan 16 23:59:54.944549 kubelet[3027]: E0116 23:59:54.944181 3027 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": dial tcp 172.31.23.167:6443: connect: connection refused" interval="1.6s" Jan 16 23:59:54.945700 containerd[2150]: time="2026-01-16T23:59:54.945550396Z" level=info msg="StartContainer for \"1edc88abd6ab34d0897b1121cba575d0cb115a71fd638c0ca745037675536d02\"" Jan 16 23:59:54.957050 containerd[2150]: time="2026-01-16T23:59:54.956842300Z" level=info msg="CreateContainer within sandbox \"a5766b3578598370c2dc652f8b30c4b376a96ed9551e33c28c84b96a2517c45e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e\"" Jan 16 23:59:54.957728 containerd[2150]: time="2026-01-16T23:59:54.957609220Z" level=info msg="StartContainer for \"1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e\"" Jan 16 23:59:54.961289 containerd[2150]: time="2026-01-16T23:59:54.961070728Z" level=info msg="CreateContainer within sandbox \"faea05b96b5961e46554f84b5d818e2140c1e1e8edf82d3490c6a419c6e478ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd\"" Jan 16 23:59:54.962341 containerd[2150]: time="2026-01-16T23:59:54.962296252Z" level=info msg="StartContainer for \"f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd\"" Jan 16 23:59:55.131883 kubelet[3027]: I0116 23:59:55.131572 3027 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 16 23:59:55.133164 kubelet[3027]: E0116 23:59:55.133067 3027 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.23.167:6443/api/v1/nodes\": dial tcp 172.31.23.167:6443: connect: connection refused" node="ip-172-31-23-167" Jan 16 23:59:55.177182 containerd[2150]: time="2026-01-16T23:59:55.175227746Z" level=info msg="StartContainer for \"1edc88abd6ab34d0897b1121cba575d0cb115a71fd638c0ca745037675536d02\" returns successfully" Jan 16 23:59:55.177182 containerd[2150]: time="2026-01-16T23:59:55.175285142Z" level=info msg="StartContainer for \"f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd\" returns successfully" Jan 16 23:59:55.181850 kubelet[3027]: W0116 23:59:55.181700 3027 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://172.31.23.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 172.31.23.167:6443: connect: connection refused Jan 16 23:59:55.181850 kubelet[3027]: E0116 23:59:55.181807 3027 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://172.31.23.167:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.23.167:6443: connect: connection refused" logger="UnhandledError" Jan 16 23:59:55.193931 containerd[2150]: time="2026-01-16T23:59:55.193750694Z" level=info msg="StartContainer for \"1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e\" returns successfully" Jan 16 23:59:55.602249 kubelet[3027]: E0116 23:59:55.602041 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:55.613201 kubelet[3027]: E0116 23:59:55.613088 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:55.621293 kubelet[3027]: E0116 23:59:55.621052 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:56.627128 kubelet[3027]: E0116 23:59:56.627092 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:56.629265 kubelet[3027]: E0116 23:59:56.628641 3027 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:56.739470 kubelet[3027]: I0116 23:59:56.736877 3027 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 16 23:59:59.377096 kubelet[3027]: E0116 23:59:59.377033 3027 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-23-167\" not found" node="ip-172-31-23-167" Jan 16 23:59:59.441995 kubelet[3027]: I0116 23:59:59.438152 3027 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-167" Jan 16 23:59:59.441995 kubelet[3027]: I0116 23:59:59.441493 3027 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 16 23:59:59.498099 kubelet[3027]: E0116 23:59:59.498032 3027 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 16 23:59:59.498099 kubelet[3027]: I0116 23:59:59.498083 3027 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 16 23:59:59.508009 kubelet[3027]: I0116 23:59:59.507940 3027 apiserver.go:52] "Watching apiserver" Jan 16 23:59:59.511355 kubelet[3027]: E0116 23:59:59.511303 3027 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-23-167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 16 23:59:59.511355 kubelet[3027]: I0116 23:59:59.511351 3027 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:59.517827 kubelet[3027]: E0116 23:59:59.517771 3027 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-23-167\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 16 23:59:59.536419 kubelet[3027]: I0116 23:59:59.536260 3027 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 23:59:59.773525 kubelet[3027]: I0116 23:59:59.773470 3027 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 16 23:59:59.789307 kubelet[3027]: E0116 23:59:59.789255 3027 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-23-167\" is forbidden: no PriorityClass with name system-node-critical was found" 
pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 17 00:00:00.667336 systemd[1]: Started logrotate.service - Rotate and Compress System Logs. Jan 17 00:00:00.694465 systemd[1]: logrotate.service: Deactivated successfully. Jan 17 00:00:01.757287 systemd[1]: Reloading requested from client PID 3307 ('systemctl') (unit session-7.scope)... Jan 17 00:00:01.757311 systemd[1]: Reloading... Jan 17 00:00:01.918487 zram_generator::config[3350]: No configuration found. Jan 17 00:00:02.192894 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 17 00:00:02.390821 systemd[1]: Reloading finished in 632 ms. Jan 17 00:00:02.451011 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:02.462139 systemd[1]: kubelet.service: Deactivated successfully. Jan 17 00:00:02.462893 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:02.474277 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 17 00:00:02.849712 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 17 00:00:02.864963 (kubelet)[3417]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 17 00:00:02.949645 kubelet[3417]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:00:02.949645 kubelet[3417]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 17 00:00:02.949645 kubelet[3417]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 17 00:00:02.951117 kubelet[3417]: I0117 00:00:02.949678 3417 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 17 00:00:02.964505 kubelet[3417]: I0117 00:00:02.963584 3417 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 17 00:00:02.964505 kubelet[3417]: I0117 00:00:02.963634 3417 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 17 00:00:02.964505 kubelet[3417]: I0117 00:00:02.964094 3417 server.go:954] "Client rotation is on, will bootstrap in background" Jan 17 00:00:02.966537 kubelet[3417]: I0117 00:00:02.966498 3417 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 17 00:00:02.972048 kubelet[3417]: I0117 00:00:02.971774 3417 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 17 00:00:02.999617 kubelet[3417]: E0117 00:00:02.998699 3417 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 17 00:00:02.999617 kubelet[3417]: I0117 00:00:02.998764 3417 server.go:1421] "CRI implementation should be updated to support RuntimeConfig when KubeletCgroupDriverFromCRI feature gate has been enabled. 
Falling back to using cgroupDriver from kubelet config." Jan 17 00:00:03.011948 kubelet[3417]: I0117 00:00:03.011905 3417 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 17 00:00:03.013484 kubelet[3417]: I0117 00:00:03.012950 3417 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 17 00:00:03.013484 kubelet[3417]: I0117 00:00:03.013011 3417 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-23-167","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"cgroupfs","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":1} Jan 17 00:00:03.013484 kubelet[3417]: I0117 00:00:03.013307 3417 topology_manager.go:138] "Creating topology manager with none policy" Jan 17 00:00:03.013484 kubelet[3417]: I0117 00:00:03.013326 3417 container_manager_linux.go:304] "Creating device plugin manager" Jan 17 00:00:03.013904 kubelet[3417]: I0117 00:00:03.013404 3417 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:00:03.013904 kubelet[3417]: I0117 00:00:03.013684 3417 kubelet.go:446] "Attempting to sync node with API server" Jan 17 00:00:03.013904 kubelet[3417]: I0117 00:00:03.013708 3417 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 17 00:00:03.017612 kubelet[3417]: I0117 00:00:03.015239 3417 kubelet.go:352] "Adding apiserver pod source" Jan 17 00:00:03.017612 kubelet[3417]: I0117 00:00:03.017507 3417 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 17 00:00:03.022791 kubelet[3417]: I0117 00:00:03.022389 3417 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 17 00:00:03.023242 kubelet[3417]: I0117 00:00:03.023201 3417 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 17 00:00:03.024524 kubelet[3417]: I0117 00:00:03.023936 3417 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 17 00:00:03.024524 kubelet[3417]: I0117 00:00:03.023993 3417 server.go:1287] "Started kubelet" Jan 17 
00:00:03.036412 kubelet[3417]: I0117 00:00:03.032471 3417 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 17 00:00:03.040733 kubelet[3417]: I0117 00:00:03.040654 3417 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 17 00:00:03.045075 kubelet[3417]: I0117 00:00:03.044990 3417 server.go:479] "Adding debug handlers to kubelet server" Jan 17 00:00:03.049508 kubelet[3417]: I0117 00:00:03.048645 3417 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 17 00:00:03.049508 kubelet[3417]: I0117 00:00:03.049092 3417 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 17 00:00:03.049508 kubelet[3417]: I0117 00:00:03.049415 3417 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 17 00:00:03.053951 kubelet[3417]: I0117 00:00:03.053853 3417 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 17 00:00:03.055207 kubelet[3417]: E0117 00:00:03.054239 3417 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ip-172-31-23-167\" not found" Jan 17 00:00:03.055207 kubelet[3417]: I0117 00:00:03.055074 3417 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 17 00:00:03.062364 kubelet[3417]: I0117 00:00:03.062258 3417 reconciler.go:26] "Reconciler: start to sync state" Jan 17 00:00:03.109873 kubelet[3417]: I0117 00:00:03.109436 3417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 17 00:00:03.114007 kubelet[3417]: I0117 00:00:03.113933 3417 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 17 00:00:03.114286 kubelet[3417]: I0117 00:00:03.114190 3417 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 17 00:00:03.114664 kubelet[3417]: I0117 00:00:03.114224 3417 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 17 00:00:03.116014 kubelet[3417]: I0117 00:00:03.115529 3417 kubelet.go:2382] "Starting kubelet main sync loop" Jan 17 00:00:03.116014 kubelet[3417]: E0117 00:00:03.115637 3417 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 17 00:00:03.126645 kubelet[3417]: I0117 00:00:03.124326 3417 factory.go:221] Registration of the systemd container factory successfully Jan 17 00:00:03.126645 kubelet[3417]: I0117 00:00:03.124607 3417 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 17 00:00:03.158105 kubelet[3417]: E0117 00:00:03.158047 3417 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 17 00:00:03.163277 kubelet[3417]: I0117 00:00:03.163242 3417 factory.go:221] Registration of the containerd container factory successfully Jan 17 00:00:03.215872 kubelet[3417]: E0117 00:00:03.215740 3417 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 17 00:00:03.324345 update_engine[2114]: I20260117 00:00:03.322811 2114 update_attempter.cc:509] Updating boot flags... 
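The nodeConfig dump above is the effective configuration rendered into the container manager; its HardEvictionThresholds and the cpu/memory/topology manager settings map one-to-one onto KubeletConfiguration fields. Read back out of the dump (these match the kubelet's stock defaults), the same settings expressed as config:

evictionHard:
  memory.available: "100Mi"
  nodefs.available: "10%"
  nodefs.inodesFree: "5%"
  imagefs.available: "15%"
  imagefs.inodesFree: "5%"
cpuManagerPolicy: none
cpuManagerReconcilePeriod: 10s       # CPUManagerReconcilePeriod=10000000000ns in the dump
memoryManagerPolicy: None
topologyManagerPolicy: none
topologyManagerScope: container
podPidsLimit: -1
cpuCFSQuotaPeriod: 100ms             # CPUCFSQuotaPeriod=100000000ns in the dump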
Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.324858 3417 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.324887 3417 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.324928 3417 state_mem.go:36] "Initialized new in-memory state store" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.325275 3417 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.325299 3417 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.325338 3417 policy_none.go:49] "None policy: Start" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.325357 3417 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 17 00:00:03.325531 kubelet[3417]: I0117 00:00:03.325379 3417 state_mem.go:35] "Initializing new in-memory state store" Jan 17 00:00:03.326618 kubelet[3417]: I0117 00:00:03.326242 3417 state_mem.go:75] "Updated machine memory state" Jan 17 00:00:03.334322 kubelet[3417]: I0117 00:00:03.334286 3417 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 17 00:00:03.334771 kubelet[3417]: I0117 00:00:03.334749 3417 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 17 00:00:03.335618 kubelet[3417]: I0117 00:00:03.335174 3417 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 17 00:00:03.347788 kubelet[3417]: I0117 00:00:03.347706 3417 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 17 00:00:03.351283 kubelet[3417]: E0117 00:00:03.351153 3417 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 17 00:00:03.421483 kubelet[3417]: I0117 00:00:03.418054 3417 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 17 00:00:03.421483 kubelet[3417]: I0117 00:00:03.419899 3417 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 17 00:00:03.424546 kubelet[3417]: I0117 00:00:03.424501 3417 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.462761 kubelet[3417]: I0117 00:00:03.462722 3417 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-23-167" Jan 17 00:00:03.473965 kubelet[3417]: I0117 00:00:03.472909 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-k8s-certs\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 17 00:00:03.473965 kubelet[3417]: I0117 00:00:03.473084 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 17 00:00:03.473965 kubelet[3417]: I0117 00:00:03.473245 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-ca-certs\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.473965 kubelet[3417]: I0117 00:00:03.473797 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-kubeconfig\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.474509 kubelet[3417]: I0117 00:00:03.474130 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/829552e57574c4a8c7a18c7fabd07afc-ca-certs\") pod \"kube-apiserver-ip-172-31-23-167\" (UID: \"829552e57574c4a8c7a18c7fabd07afc\") " pod="kube-system/kube-apiserver-ip-172-31-23-167" Jan 17 00:00:03.475432 kubelet[3417]: I0117 00:00:03.474859 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.478693 kubelet[3417]: I0117 00:00:03.477826 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-k8s-certs\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " 
pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.478693 kubelet[3417]: I0117 00:00:03.477921 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/1fd2787fe7cacbc14ed8843298513aee-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-23-167\" (UID: \"1fd2787fe7cacbc14ed8843298513aee\") " pod="kube-system/kube-controller-manager-ip-172-31-23-167" Jan 17 00:00:03.478693 kubelet[3417]: I0117 00:00:03.478175 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7fa2bc7d0f83b1ccad4a1781934a9cba-kubeconfig\") pod \"kube-scheduler-ip-172-31-23-167\" (UID: \"7fa2bc7d0f83b1ccad4a1781934a9cba\") " pod="kube-system/kube-scheduler-ip-172-31-23-167" Jan 17 00:00:03.494963 kubelet[3417]: I0117 00:00:03.494865 3417 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-23-167" Jan 17 00:00:03.495926 kubelet[3417]: I0117 00:00:03.495583 3417 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-23-167" Jan 17 00:00:03.533151 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3467) Jan 17 00:00:03.913623 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 37 scanned by (udev-worker) (3467) Jan 17 00:00:04.019075 kubelet[3417]: I0117 00:00:04.018707 3417 apiserver.go:52] "Watching apiserver" Jan 17 00:00:04.058363 kubelet[3417]: I0117 00:00:04.057462 3417 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 17 00:00:04.277122 kubelet[3417]: I0117 00:00:04.276579 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-23-167" podStartSLOduration=1.276555719 podStartE2EDuration="1.276555719s" podCreationTimestamp="2026-01-17 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:04.261745691 +0000 UTC m=+1.390755776" watchObservedRunningTime="2026-01-17 00:00:04.276555719 +0000 UTC m=+1.405565804" Jan 17 00:00:04.295015 kubelet[3417]: I0117 00:00:04.293155 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-23-167" podStartSLOduration=1.2931319669999999 podStartE2EDuration="1.293131967s" podCreationTimestamp="2026-01-17 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:04.277693439 +0000 UTC m=+1.406703524" watchObservedRunningTime="2026-01-17 00:00:04.293131967 +0000 UTC m=+1.422142064" Jan 17 00:00:04.295015 kubelet[3417]: I0117 00:00:04.293310 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-23-167" podStartSLOduration=1.293301887 podStartE2EDuration="1.293301887s" podCreationTimestamp="2026-01-17 00:00:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:04.293110235 +0000 UTC m=+1.422120320" watchObservedRunningTime="2026-01-17 00:00:04.293301887 +0000 UTC m=+1.422311984" Jan 17 00:00:06.222436 kubelet[3417]: I0117 00:00:06.222384 3417 kuberuntime_manager.go:1702] "Updating runtime 
config through cri with podcidr" CIDR="192.168.0.0/24" Jan 17 00:00:06.223305 containerd[2150]: time="2026-01-17T00:00:06.223253424Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 17 00:00:06.223822 kubelet[3417]: I0117 00:00:06.223642 3417 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 17 00:00:07.111819 kubelet[3417]: I0117 00:00:07.111718 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/15a720b0-ae35-4dcc-87a8-4a2baf221fc0-lib-modules\") pod \"kube-proxy-qbc8s\" (UID: \"15a720b0-ae35-4dcc-87a8-4a2baf221fc0\") " pod="kube-system/kube-proxy-qbc8s" Jan 17 00:00:07.111819 kubelet[3417]: I0117 00:00:07.111787 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r4xtp\" (UniqueName: \"kubernetes.io/projected/15a720b0-ae35-4dcc-87a8-4a2baf221fc0-kube-api-access-r4xtp\") pod \"kube-proxy-qbc8s\" (UID: \"15a720b0-ae35-4dcc-87a8-4a2baf221fc0\") " pod="kube-system/kube-proxy-qbc8s" Jan 17 00:00:07.112881 kubelet[3417]: I0117 00:00:07.112760 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/15a720b0-ae35-4dcc-87a8-4a2baf221fc0-kube-proxy\") pod \"kube-proxy-qbc8s\" (UID: \"15a720b0-ae35-4dcc-87a8-4a2baf221fc0\") " pod="kube-system/kube-proxy-qbc8s" Jan 17 00:00:07.112881 kubelet[3417]: I0117 00:00:07.112851 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/15a720b0-ae35-4dcc-87a8-4a2baf221fc0-xtables-lock\") pod \"kube-proxy-qbc8s\" (UID: \"15a720b0-ae35-4dcc-87a8-4a2baf221fc0\") " pod="kube-system/kube-proxy-qbc8s" Jan 17 00:00:07.354924 containerd[2150]: time="2026-01-17T00:00:07.353664554Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbc8s,Uid:15a720b0-ae35-4dcc-87a8-4a2baf221fc0,Namespace:kube-system,Attempt:0,}" Jan 17 00:00:07.408940 containerd[2150]: time="2026-01-17T00:00:07.408717974Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:07.409764 containerd[2150]: time="2026-01-17T00:00:07.409676774Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:07.409764 containerd[2150]: time="2026-01-17T00:00:07.409747310Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:07.410221 containerd[2150]: time="2026-01-17T00:00:07.410054414Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:07.417498 kubelet[3417]: I0117 00:00:07.414321 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/11c64c0f-b3de-476c-b88d-e4b12618deab-var-lib-calico\") pod \"tigera-operator-7dcd859c48-mb6cm\" (UID: \"11c64c0f-b3de-476c-b88d-e4b12618deab\") " pod="tigera-operator/tigera-operator-7dcd859c48-mb6cm" Jan 17 00:00:07.417498 kubelet[3417]: I0117 00:00:07.414408 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rjtwk\" (UniqueName: \"kubernetes.io/projected/11c64c0f-b3de-476c-b88d-e4b12618deab-kube-api-access-rjtwk\") pod \"tigera-operator-7dcd859c48-mb6cm\" (UID: \"11c64c0f-b3de-476c-b88d-e4b12618deab\") " pod="tigera-operator/tigera-operator-7dcd859c48-mb6cm" Jan 17 00:00:07.456383 systemd[1]: run-containerd-runc-k8s.io-deff4071bdd5b3d093914e5dc4e912cd5fa645249729975d72212b03d4be4f02-runc.snpcjz.mount: Deactivated successfully. Jan 17 00:00:07.490905 containerd[2150]: time="2026-01-17T00:00:07.490812051Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-qbc8s,Uid:15a720b0-ae35-4dcc-87a8-4a2baf221fc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"deff4071bdd5b3d093914e5dc4e912cd5fa645249729975d72212b03d4be4f02\"" Jan 17 00:00:07.501377 containerd[2150]: time="2026-01-17T00:00:07.501204759Z" level=info msg="CreateContainer within sandbox \"deff4071bdd5b3d093914e5dc4e912cd5fa645249729975d72212b03d4be4f02\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 17 00:00:07.522227 containerd[2150]: time="2026-01-17T00:00:07.521703039Z" level=info msg="CreateContainer within sandbox \"deff4071bdd5b3d093914e5dc4e912cd5fa645249729975d72212b03d4be4f02\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"ae61d19b1f5c14769f4294efd6c65e651162f4999faa1517473b31cf41016079\"" Jan 17 00:00:07.528088 containerd[2150]: time="2026-01-17T00:00:07.527059779Z" level=info msg="StartContainer for \"ae61d19b1f5c14769f4294efd6c65e651162f4999faa1517473b31cf41016079\"" Jan 17 00:00:07.633809 containerd[2150]: time="2026-01-17T00:00:07.633181443Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mb6cm,Uid:11c64c0f-b3de-476c-b88d-e4b12618deab,Namespace:tigera-operator,Attempt:0,}" Jan 17 00:00:07.645035 containerd[2150]: time="2026-01-17T00:00:07.644963223Z" level=info msg="StartContainer for \"ae61d19b1f5c14769f4294efd6c65e651162f4999faa1517473b31cf41016079\" returns successfully" Jan 17 00:00:07.684247 containerd[2150]: time="2026-01-17T00:00:07.683690920Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:07.684247 containerd[2150]: time="2026-01-17T00:00:07.683793676Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:07.684247 containerd[2150]: time="2026-01-17T00:00:07.683831500Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:07.684247 containerd[2150]: time="2026-01-17T00:00:07.683996560Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:07.789752 containerd[2150]: time="2026-01-17T00:00:07.789612916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-mb6cm,Uid:11c64c0f-b3de-476c-b88d-e4b12618deab,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"9e7078619ae72f64ea683d7d33979867adeb4acd4a534a2e566afea3a7229d29\"" Jan 17 00:00:07.796816 containerd[2150]: time="2026-01-17T00:00:07.796058872Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 17 00:00:08.314749 kubelet[3417]: I0117 00:00:08.313258 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-qbc8s" podStartSLOduration=1.313233699 podStartE2EDuration="1.313233699s" podCreationTimestamp="2026-01-17 00:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:00:08.269572827 +0000 UTC m=+5.398582924" watchObservedRunningTime="2026-01-17 00:00:08.313233699 +0000 UTC m=+5.442243796" Jan 17 00:00:10.572303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3575350632.mount: Deactivated successfully. Jan 17 00:00:11.296786 containerd[2150]: time="2026-01-17T00:00:11.296700546Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:11.299257 containerd[2150]: time="2026-01-17T00:00:11.298926006Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 17 00:00:11.300555 containerd[2150]: time="2026-01-17T00:00:11.300499530Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:11.304878 containerd[2150]: time="2026-01-17T00:00:11.304799634Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:11.306769 containerd[2150]: time="2026-01-17T00:00:11.306585618Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.510459594s" Jan 17 00:00:11.306769 containerd[2150]: time="2026-01-17T00:00:11.306638550Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 17 00:00:11.312527 containerd[2150]: time="2026-01-17T00:00:11.312430074Z" level=info msg="CreateContainer within sandbox \"9e7078619ae72f64ea683d7d33979867adeb4acd4a534a2e566afea3a7229d29\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 17 00:00:11.332108 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount282168221.mount: Deactivated successfully. 
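[annotation] The containerd entries above bracket the operator image pull: "PullImage" is logged at 00:00:07.796 and the "Pulled" summary at 00:00:11.306, with containerd reporting an internally measured 3.510459594s for the ~22 MB image. A minimal Go sketch cross-checking that figure from the two journal timestamps; the small discrepancy is expected, since containerd times the pull operation itself rather than the journal writes.

package main

import (
	"fmt"
	"time"
)

func main() {
	// Timestamps copied from the two containerd entries above.
	start, err := time.Parse(time.RFC3339Nano, "2026-01-17T00:00:07.796058872Z")
	if err != nil {
		panic(err)
	}
	end, err := time.Parse(time.RFC3339Nano, "2026-01-17T00:00:11.306585618Z")
	if err != nil {
		panic(err)
	}
	// Prints 3.510526746s; containerd's own 3.510459594s is measured
	// inside the pull, so the journal-derived delta is slightly larger.
	fmt.Println(end.Sub(start))
}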
Jan 17 00:00:11.334141 containerd[2150]: time="2026-01-17T00:00:11.333054138Z" level=info msg="CreateContainer within sandbox \"9e7078619ae72f64ea683d7d33979867adeb4acd4a534a2e566afea3a7229d29\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496\"" Jan 17 00:00:11.337474 containerd[2150]: time="2026-01-17T00:00:11.335128830Z" level=info msg="StartContainer for \"4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496\"" Jan 17 00:00:11.446781 containerd[2150]: time="2026-01-17T00:00:11.446621550Z" level=info msg="StartContainer for \"4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496\" returns successfully" Jan 17 00:00:12.280352 kubelet[3417]: I0117 00:00:12.280129 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-mb6cm" podStartSLOduration=1.765814888 podStartE2EDuration="5.28010727s" podCreationTimestamp="2026-01-17 00:00:07 +0000 UTC" firstStartedPulling="2026-01-17 00:00:07.794228668 +0000 UTC m=+4.923238765" lastFinishedPulling="2026-01-17 00:00:11.30852105 +0000 UTC m=+8.437531147" observedRunningTime="2026-01-17 00:00:12.279421182 +0000 UTC m=+9.408431267" watchObservedRunningTime="2026-01-17 00:00:12.28010727 +0000 UTC m=+9.409117379" Jan 17 00:00:20.259173 sudo[2498]: pam_unix(sudo:session): session closed for user root Jan 17 00:00:20.344755 sshd[2494]: pam_unix(sshd:session): session closed for user core Jan 17 00:00:20.359014 systemd[1]: sshd@6-172.31.23.167:22-68.220.241.50:47832.service: Deactivated successfully. Jan 17 00:00:20.373761 systemd[1]: session-7.scope: Deactivated successfully. Jan 17 00:00:20.382473 systemd-logind[2113]: Session 7 logged out. Waiting for processes to exit. Jan 17 00:00:20.385930 systemd-logind[2113]: Removed session 7. 
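[annotation] The tigera-operator startup-latency entry above is self-consistent: podStartSLOduration appears to equal podStartE2EDuration minus the image-pull window (firstStartedPulling through lastFinishedPulling), which is also why kube-proxy and the control-plane pods, which pulled nothing, report identical SLO and E2E figures with zero-valued pull timestamps. A quick check with the values copied from the entry; the subtraction is inferred from the numbers, not taken from kubelet source.

package main

import (
	"fmt"
	"time"
)

// mustParse parses an RFC3339Nano timestamp or panics; the log prints
// these instants as "2026-01-17 00:00:07.794228668 +0000 UTC", rewritten
// here in RFC3339 form.
func mustParse(s string) time.Time {
	t, err := time.Parse(time.RFC3339Nano, s)
	if err != nil {
		panic(err)
	}
	return t
}

func main() {
	e2e := 5280107270 * time.Nanosecond                      // podStartE2EDuration="5.28010727s"
	pullStart := mustParse("2026-01-17T00:00:07.794228668Z") // firstStartedPulling
	pullEnd := mustParse("2026-01-17T00:00:11.30852105Z")    // lastFinishedPulling
	// Prints 1.765814888s, matching podStartSLOduration exactly.
	fmt.Println(e2e - pullEnd.Sub(pullStart))
}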
Jan 17 00:00:36.630763 kubelet[3417]: I0117 00:00:36.630691 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kvqhh\" (UniqueName: \"kubernetes.io/projected/478ed9ef-fdf5-4c11-a5fa-63600d79f09c-kube-api-access-kvqhh\") pod \"calico-typha-7868898746-9x4vb\" (UID: \"478ed9ef-fdf5-4c11-a5fa-63600d79f09c\") " pod="calico-system/calico-typha-7868898746-9x4vb" Jan 17 00:00:36.632721 kubelet[3417]: I0117 00:00:36.630783 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/478ed9ef-fdf5-4c11-a5fa-63600d79f09c-typha-certs\") pod \"calico-typha-7868898746-9x4vb\" (UID: \"478ed9ef-fdf5-4c11-a5fa-63600d79f09c\") " pod="calico-system/calico-typha-7868898746-9x4vb" Jan 17 00:00:36.632721 kubelet[3417]: I0117 00:00:36.630880 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/478ed9ef-fdf5-4c11-a5fa-63600d79f09c-tigera-ca-bundle\") pod \"calico-typha-7868898746-9x4vb\" (UID: \"478ed9ef-fdf5-4c11-a5fa-63600d79f09c\") " pod="calico-system/calico-typha-7868898746-9x4vb" Jan 17 00:00:36.726421 kubelet[3417]: I0117 00:00:36.726328 3417 status_manager.go:890] "Failed to get status for pod" podUID="802fe422-66a7-48d5-8525-6ea9bd3c886a" pod="calico-system/calico-node-ckhl9" err="pods \"calico-node-ckhl9\" is forbidden: User \"system:node:ip-172-31-23-167\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-167' and this object" Jan 17 00:00:36.727533 kubelet[3417]: W0117 00:00:36.727467 3417 reflector.go:569] object-"calico-system"/"node-certs": failed to list *v1.Secret: secrets "node-certs" is forbidden: User "system:node:ip-172-31-23-167" cannot list resource "secrets" in API group "" in the namespace "calico-system": no relationship found between node 'ip-172-31-23-167' and this object Jan 17 00:00:36.727681 kubelet[3417]: E0117 00:00:36.727547 3417 reflector.go:166] "Unhandled Error" err="object-\"calico-system\"/\"node-certs\": Failed to watch *v1.Secret: failed to list *v1.Secret: secrets \"node-certs\" is forbidden: User \"system:node:ip-172-31-23-167\" cannot list resource \"secrets\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ip-172-31-23-167' and this object" logger="UnhandledError" Jan 17 00:00:36.832985 kubelet[3417]: I0117 00:00:36.832864 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-policysync\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833159 kubelet[3417]: I0117 00:00:36.833011 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/802fe422-66a7-48d5-8525-6ea9bd3c886a-tigera-ca-bundle\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833159 kubelet[3417]: I0117 00:00:36.833052 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-xtables-lock\") pod 
\"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833159 kubelet[3417]: I0117 00:00:36.833090 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-flexvol-driver-host\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833159 kubelet[3417]: I0117 00:00:36.833126 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-var-run-calico\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833382 kubelet[3417]: I0117 00:00:36.833164 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-cni-log-dir\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833382 kubelet[3417]: I0117 00:00:36.833198 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/802fe422-66a7-48d5-8525-6ea9bd3c886a-node-certs\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833382 kubelet[3417]: I0117 00:00:36.833235 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-lib-modules\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833382 kubelet[3417]: I0117 00:00:36.833271 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-var-lib-calico\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833382 kubelet[3417]: I0117 00:00:36.833309 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-cni-net-dir\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833684 kubelet[3417]: I0117 00:00:36.833348 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/802fe422-66a7-48d5-8525-6ea9bd3c886a-cni-bin-dir\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.833684 kubelet[3417]: I0117 00:00:36.833382 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wcfvk\" (UniqueName: \"kubernetes.io/projected/802fe422-66a7-48d5-8525-6ea9bd3c886a-kube-api-access-wcfvk\") pod \"calico-node-ckhl9\" (UID: \"802fe422-66a7-48d5-8525-6ea9bd3c886a\") " 
pod="calico-system/calico-node-ckhl9" Jan 17 00:00:36.859513 containerd[2150]: time="2026-01-17T00:00:36.858784965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7868898746-9x4vb,Uid:478ed9ef-fdf5-4c11-a5fa-63600d79f09c,Namespace:calico-system,Attempt:0,}" Jan 17 00:00:36.957223 kubelet[3417]: E0117 00:00:36.954157 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:36.958127 kubelet[3417]: E0117 00:00:36.955579 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.958127 kubelet[3417]: W0117 00:00:36.958127 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.958127 kubelet[3417]: E0117 00:00:36.958171 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.969246 containerd[2150]: time="2026-01-17T00:00:36.967989093Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:36.969246 containerd[2150]: time="2026-01-17T00:00:36.968150013Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:36.969246 containerd[2150]: time="2026-01-17T00:00:36.968181081Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:36.969246 containerd[2150]: time="2026-01-17T00:00:36.968434053Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:36.974401 kubelet[3417]: E0117 00:00:36.974347 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.974401 kubelet[3417]: W0117 00:00:36.974390 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.976802 kubelet[3417]: E0117 00:00:36.974425 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.978883 kubelet[3417]: E0117 00:00:36.978614 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.978883 kubelet[3417]: W0117 00:00:36.978655 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.978883 kubelet[3417]: E0117 00:00:36.978737 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:36.981603 kubelet[3417]: E0117 00:00:36.979247 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.981603 kubelet[3417]: W0117 00:00:36.979271 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.981603 kubelet[3417]: E0117 00:00:36.979297 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.981603 kubelet[3417]: E0117 00:00:36.980320 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.981603 kubelet[3417]: W0117 00:00:36.980488 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.981603 kubelet[3417]: E0117 00:00:36.980602 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.981979 kubelet[3417]: E0117 00:00:36.981681 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.981979 kubelet[3417]: W0117 00:00:36.981706 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.981979 kubelet[3417]: E0117 00:00:36.981736 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.986591 kubelet[3417]: E0117 00:00:36.982734 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.986591 kubelet[3417]: W0117 00:00:36.982769 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.986591 kubelet[3417]: E0117 00:00:36.982802 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.991390 kubelet[3417]: E0117 00:00:36.990775 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.991390 kubelet[3417]: W0117 00:00:36.990812 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.991390 kubelet[3417]: E0117 00:00:36.990869 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:36.993800 kubelet[3417]: E0117 00:00:36.991357 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:36.998968 kubelet[3417]: W0117 00:00:36.993529 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:36.998968 kubelet[3417]: E0117 00:00:36.995286 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:36.999373 kubelet[3417]: E0117 00:00:36.999340 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.000257 kubelet[3417]: W0117 00:00:36.999486 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.000257 kubelet[3417]: E0117 00:00:36.999525 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.000257 kubelet[3417]: E0117 00:00:36.999952 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.000257 kubelet[3417]: W0117 00:00:36.999972 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.000257 kubelet[3417]: E0117 00:00:36.999994 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.000865 kubelet[3417]: E0117 00:00:37.000841 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.000967 kubelet[3417]: W0117 00:00:37.000943 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.001085 kubelet[3417]: E0117 00:00:37.001062 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.002250 kubelet[3417]: E0117 00:00:37.002213 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.002458 kubelet[3417]: W0117 00:00:37.002414 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.002585 kubelet[3417]: E0117 00:00:37.002561 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.005937 kubelet[3417]: E0117 00:00:37.005495 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.005937 kubelet[3417]: W0117 00:00:37.005531 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.005937 kubelet[3417]: E0117 00:00:37.005564 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.007389 kubelet[3417]: E0117 00:00:37.006969 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.007389 kubelet[3417]: W0117 00:00:37.007001 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.007389 kubelet[3417]: E0117 00:00:37.007033 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.009974 kubelet[3417]: E0117 00:00:37.009282 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.009974 kubelet[3417]: W0117 00:00:37.009316 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.009974 kubelet[3417]: E0117 00:00:37.009349 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.013123 kubelet[3417]: E0117 00:00:37.011509 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.013123 kubelet[3417]: W0117 00:00:37.011544 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.013123 kubelet[3417]: E0117 00:00:37.011577 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.015061 kubelet[3417]: E0117 00:00:37.014356 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.015435 kubelet[3417]: W0117 00:00:37.015397 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.017184 kubelet[3417]: E0117 00:00:37.015704 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.020558 kubelet[3417]: E0117 00:00:37.019902 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.020558 kubelet[3417]: W0117 00:00:37.019936 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.020558 kubelet[3417]: E0117 00:00:37.019969 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.027549 kubelet[3417]: E0117 00:00:37.025566 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.031537 kubelet[3417]: W0117 00:00:37.031475 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.036839 kubelet[3417]: E0117 00:00:37.036621 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.054421 kubelet[3417]: E0117 00:00:37.054371 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.056081 kubelet[3417]: W0117 00:00:37.055682 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.059625 kubelet[3417]: E0117 00:00:37.058488 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.059625 kubelet[3417]: W0117 00:00:37.058525 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.059625 kubelet[3417]: E0117 00:00:37.058558 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.059625 kubelet[3417]: E0117 00:00:37.058726 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.065646 kubelet[3417]: E0117 00:00:37.065604 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.066781 kubelet[3417]: W0117 00:00:37.066113 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.066781 kubelet[3417]: E0117 00:00:37.066164 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.066781 kubelet[3417]: I0117 00:00:37.066210 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7326267c-1eb2-4759-b98f-e8dc2742ecd4-kubelet-dir\") pod \"csi-node-driver-jjr5r\" (UID: \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\") " pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:00:37.072072 kubelet[3417]: E0117 00:00:37.071752 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.072072 kubelet[3417]: W0117 00:00:37.071789 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.072072 kubelet[3417]: E0117 00:00:37.071823 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.072072 kubelet[3417]: I0117 00:00:37.071867 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/7326267c-1eb2-4759-b98f-e8dc2742ecd4-registration-dir\") pod \"csi-node-driver-jjr5r\" (UID: \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\") " pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:00:37.075639 kubelet[3417]: E0117 00:00:37.075121 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.075639 kubelet[3417]: W0117 00:00:37.075173 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.075639 kubelet[3417]: E0117 00:00:37.075573 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.077230 kubelet[3417]: I0117 00:00:37.076813 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9cl6\" (UniqueName: \"kubernetes.io/projected/7326267c-1eb2-4759-b98f-e8dc2742ecd4-kube-api-access-s9cl6\") pod \"csi-node-driver-jjr5r\" (UID: \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\") " pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:00:37.078639 kubelet[3417]: E0117 00:00:37.078135 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.078639 kubelet[3417]: W0117 00:00:37.078171 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.078639 kubelet[3417]: E0117 00:00:37.078206 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.081172 kubelet[3417]: E0117 00:00:37.080924 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.083027 kubelet[3417]: W0117 00:00:37.080964 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.083027 kubelet[3417]: E0117 00:00:37.082030 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.087654 kubelet[3417]: E0117 00:00:37.086745 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.087654 kubelet[3417]: W0117 00:00:37.086780 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.087654 kubelet[3417]: E0117 00:00:37.086815 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.089476 kubelet[3417]: E0117 00:00:37.088642 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.089476 kubelet[3417]: W0117 00:00:37.088693 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.089476 kubelet[3417]: E0117 00:00:37.088728 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.097726 kubelet[3417]: E0117 00:00:37.096665 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.097726 kubelet[3417]: W0117 00:00:37.096717 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.097726 kubelet[3417]: E0117 00:00:37.096766 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.097726 kubelet[3417]: I0117 00:00:37.096815 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/7326267c-1eb2-4759-b98f-e8dc2742ecd4-socket-dir\") pod \"csi-node-driver-jjr5r\" (UID: \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\") " pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:00:37.099031 kubelet[3417]: E0117 00:00:37.098990 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.099985 kubelet[3417]: W0117 00:00:37.099622 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.102325 kubelet[3417]: E0117 00:00:37.100914 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.102325 kubelet[3417]: I0117 00:00:37.101155 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/7326267c-1eb2-4759-b98f-e8dc2742ecd4-varrun\") pod \"csi-node-driver-jjr5r\" (UID: \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\") " pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:00:37.105384 kubelet[3417]: E0117 00:00:37.103674 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.105384 kubelet[3417]: W0117 00:00:37.104491 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.107531 kubelet[3417]: E0117 00:00:37.105949 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.108535 kubelet[3417]: E0117 00:00:37.108498 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.108862 kubelet[3417]: W0117 00:00:37.108804 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.109659 kubelet[3417]: E0117 00:00:37.109415 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.111281 kubelet[3417]: E0117 00:00:37.111124 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.111281 kubelet[3417]: W0117 00:00:37.111161 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.111786 kubelet[3417]: E0117 00:00:37.111542 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.112232 kubelet[3417]: E0117 00:00:37.112136 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.112232 kubelet[3417]: W0117 00:00:37.112166 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.112232 kubelet[3417]: E0117 00:00:37.112196 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.113527 kubelet[3417]: E0117 00:00:37.112974 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.113527 kubelet[3417]: W0117 00:00:37.113004 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.113527 kubelet[3417]: E0117 00:00:37.113032 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.114817 kubelet[3417]: E0117 00:00:37.114584 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.114817 kubelet[3417]: W0117 00:00:37.114729 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.114817 kubelet[3417]: E0117 00:00:37.114767 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.204470 kubelet[3417]: E0117 00:00:37.204026 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.204470 kubelet[3417]: W0117 00:00:37.204065 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.204470 kubelet[3417]: E0117 00:00:37.204101 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.205727 kubelet[3417]: E0117 00:00:37.205352 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.205727 kubelet[3417]: W0117 00:00:37.205396 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.205727 kubelet[3417]: E0117 00:00:37.205429 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.211720 kubelet[3417]: E0117 00:00:37.210828 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.211720 kubelet[3417]: W0117 00:00:37.210866 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.211720 kubelet[3417]: E0117 00:00:37.210908 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.216158 kubelet[3417]: E0117 00:00:37.215630 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.216158 kubelet[3417]: W0117 00:00:37.215667 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.216158 kubelet[3417]: E0117 00:00:37.215701 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.219163 kubelet[3417]: E0117 00:00:37.217731 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.219163 kubelet[3417]: W0117 00:00:37.217765 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.219163 kubelet[3417]: E0117 00:00:37.218724 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.223258 kubelet[3417]: E0117 00:00:37.223040 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.226304 kubelet[3417]: W0117 00:00:37.225821 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.227016 kubelet[3417]: E0117 00:00:37.226512 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.228708 kubelet[3417]: E0117 00:00:37.228577 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.228708 kubelet[3417]: W0117 00:00:37.228634 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.230815 kubelet[3417]: E0117 00:00:37.230536 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.232872 kubelet[3417]: E0117 00:00:37.231908 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.232872 kubelet[3417]: W0117 00:00:37.231941 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.237359 kubelet[3417]: E0117 00:00:37.236820 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.237359 kubelet[3417]: E0117 00:00:37.237150 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.237359 kubelet[3417]: W0117 00:00:37.237172 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.240196 kubelet[3417]: E0117 00:00:37.238760 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.240196 kubelet[3417]: E0117 00:00:37.238958 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.240196 kubelet[3417]: W0117 00:00:37.239352 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.241735 kubelet[3417]: E0117 00:00:37.241120 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.243087 kubelet[3417]: E0117 00:00:37.242875 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.243087 kubelet[3417]: W0117 00:00:37.242909 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.245087 kubelet[3417]: E0117 00:00:37.244414 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.245087 kubelet[3417]: E0117 00:00:37.244897 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.245087 kubelet[3417]: W0117 00:00:37.244920 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.245838 kubelet[3417]: E0117 00:00:37.245654 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.246690 kubelet[3417]: E0117 00:00:37.246352 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.246690 kubelet[3417]: W0117 00:00:37.246379 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.247838 kubelet[3417]: E0117 00:00:37.247433 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.250201 kubelet[3417]: E0117 00:00:37.248625 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.250201 kubelet[3417]: W0117 00:00:37.248662 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.250201 kubelet[3417]: E0117 00:00:37.249959 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.252293 kubelet[3417]: E0117 00:00:37.252255 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.256256 kubelet[3417]: W0117 00:00:37.255991 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.258164 containerd[2150]: time="2026-01-17T00:00:37.257905303Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7868898746-9x4vb,Uid:478ed9ef-fdf5-4c11-a5fa-63600d79f09c,Namespace:calico-system,Attempt:0,} returns sandbox id \"ccccd471b4c15918a1c9b5dd3e2a85074291ea6b11ae284abe44e1cda24b577b\"" Jan 17 00:00:37.258712 kubelet[3417]: E0117 00:00:37.258371 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.259291 kubelet[3417]: E0117 00:00:37.258985 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.259291 kubelet[3417]: W0117 00:00:37.259010 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.259770 kubelet[3417]: E0117 00:00:37.259742 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.262032 kubelet[3417]: W0117 00:00:37.261634 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.263082 kubelet[3417]: E0117 00:00:37.259839 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.263082 kubelet[3417]: E0117 00:00:37.262731 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.264058 kubelet[3417]: E0117 00:00:37.263733 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.264058 kubelet[3417]: W0117 00:00:37.263761 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.265772 kubelet[3417]: E0117 00:00:37.265120 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.265772 kubelet[3417]: W0117 00:00:37.265152 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.265948 containerd[2150]: time="2026-01-17T00:00:37.265557091Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 17 00:00:37.268243 kubelet[3417]: E0117 00:00:37.267802 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.268243 kubelet[3417]: E0117 00:00:37.267994 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.268243 kubelet[3417]: W0117 00:00:37.268014 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.268725 kubelet[3417]: E0117 00:00:37.267714 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.270201 kubelet[3417]: E0117 00:00:37.269518 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.270201 kubelet[3417]: E0117 00:00:37.269935 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.270201 kubelet[3417]: W0117 00:00:37.269956 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.270201 kubelet[3417]: E0117 00:00:37.270020 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.271870 kubelet[3417]: E0117 00:00:37.271683 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.271870 kubelet[3417]: W0117 00:00:37.271715 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.271870 kubelet[3417]: E0117 00:00:37.271807 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.272390 kubelet[3417]: E0117 00:00:37.272293 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.272390 kubelet[3417]: W0117 00:00:37.272385 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.273419 kubelet[3417]: E0117 00:00:37.272636 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.274695 kubelet[3417]: E0117 00:00:37.274639 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.274695 kubelet[3417]: W0117 00:00:37.274679 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.277365 kubelet[3417]: E0117 00:00:37.277316 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.278123 kubelet[3417]: E0117 00:00:37.278067 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.278123 kubelet[3417]: W0117 00:00:37.278114 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.278376 kubelet[3417]: E0117 00:00:37.278149 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:37.303639 kubelet[3417]: E0117 00:00:37.303556 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:37.303639 kubelet[3417]: W0117 00:00:37.303594 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:37.303639 kubelet[3417]: E0117 00:00:37.303633 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:37.934909 kubelet[3417]: E0117 00:00:37.934494 3417 secret.go:189] Couldn't get secret calico-system/node-certs: failed to sync secret cache: timed out waiting for the condition Jan 17 00:00:37.934909 kubelet[3417]: E0117 00:00:37.934613 3417 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/802fe422-66a7-48d5-8525-6ea9bd3c886a-node-certs podName:802fe422-66a7-48d5-8525-6ea9bd3c886a nodeName:}" failed. No retries permitted until 2026-01-17 00:00:38.434582654 +0000 UTC m=+35.563592727 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "node-certs" (UniqueName: "kubernetes.io/secret/802fe422-66a7-48d5-8525-6ea9bd3c886a-node-certs") pod "calico-node-ckhl9" (UID: "802fe422-66a7-48d5-8525-6ea9bd3c886a") : failed to sync secret cache: timed out waiting for the condition Jan 17 00:00:38.025400 kubelet[3417]: E0117 00:00:38.025356 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.025400 kubelet[3417]: W0117 00:00:38.025390 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.025632 kubelet[3417]: E0117 00:00:38.025422 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.116769 kubelet[3417]: E0117 00:00:38.116700 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:38.126884 kubelet[3417]: E0117 00:00:38.126722 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.126884 kubelet[3417]: W0117 00:00:38.126784 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.126884 kubelet[3417]: E0117 00:00:38.126815 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.229008 kubelet[3417]: E0117 00:00:38.228782 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.229008 kubelet[3417]: W0117 00:00:38.228827 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.229008 kubelet[3417]: E0117 00:00:38.228860 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:38.331372 kubelet[3417]: E0117 00:00:38.330817 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.331372 kubelet[3417]: W0117 00:00:38.330851 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.331372 kubelet[3417]: E0117 00:00:38.330882 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.432218 kubelet[3417]: E0117 00:00:38.432161 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.432218 kubelet[3417]: W0117 00:00:38.432200 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.432429 kubelet[3417]: E0117 00:00:38.432233 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.533914 kubelet[3417]: E0117 00:00:38.533534 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.533914 kubelet[3417]: W0117 00:00:38.533564 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.533914 kubelet[3417]: E0117 00:00:38.533594 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.534656 kubelet[3417]: E0117 00:00:38.534631 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.534868 kubelet[3417]: W0117 00:00:38.534745 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.534868 kubelet[3417]: E0117 00:00:38.534778 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.535434 kubelet[3417]: E0117 00:00:38.535324 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.535434 kubelet[3417]: W0117 00:00:38.535346 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.535434 kubelet[3417]: E0117 00:00:38.535368 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:38.536075 kubelet[3417]: E0117 00:00:38.535914 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.536075 kubelet[3417]: W0117 00:00:38.535935 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.536075 kubelet[3417]: E0117 00:00:38.535956 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.537016 kubelet[3417]: E0117 00:00:38.536561 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.537016 kubelet[3417]: W0117 00:00:38.536583 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.537016 kubelet[3417]: E0117 00:00:38.536604 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.549889 kubelet[3417]: E0117 00:00:38.549859 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:38.550632 kubelet[3417]: W0117 00:00:38.550052 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:38.550632 kubelet[3417]: E0117 00:00:38.550089 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:38.821328 containerd[2150]: time="2026-01-17T00:00:38.821122018Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckhl9,Uid:802fe422-66a7-48d5-8525-6ea9bd3c886a,Namespace:calico-system,Attempt:0,}" Jan 17 00:00:38.873257 containerd[2150]: time="2026-01-17T00:00:38.872853419Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:00:38.874180 containerd[2150]: time="2026-01-17T00:00:38.873885707Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:00:38.874180 containerd[2150]: time="2026-01-17T00:00:38.873923735Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:38.876556 containerd[2150]: time="2026-01-17T00:00:38.874467911Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:00:38.950944 containerd[2150]: time="2026-01-17T00:00:38.950870207Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-ckhl9,Uid:802fe422-66a7-48d5-8525-6ea9bd3c886a,Namespace:calico-system,Attempt:0,} returns sandbox id \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\"" Jan 17 00:00:40.116833 kubelet[3417]: E0117 00:00:40.116734 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:42.116726 kubelet[3417]: E0117 00:00:42.116632 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:44.116540 kubelet[3417]: E0117 00:00:44.116340 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:44.467295 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1689312882.mount: Deactivated successfully. Jan 17 00:00:45.783517 containerd[2150]: time="2026-01-17T00:00:45.783421097Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:45.785185 containerd[2150]: time="2026-01-17T00:00:45.784997021Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 17 00:00:45.788021 containerd[2150]: time="2026-01-17T00:00:45.786428885Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:45.790435 containerd[2150]: time="2026-01-17T00:00:45.790386089Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:00:45.791856 containerd[2150]: time="2026-01-17T00:00:45.791795849Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 8.526155994s" Jan 17 00:00:45.792001 containerd[2150]: time="2026-01-17T00:00:45.791972093Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 17 00:00:45.795729 containerd[2150]: time="2026-01-17T00:00:45.795669413Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 17 00:00:45.832042 containerd[2150]: time="2026-01-17T00:00:45.831975197Z" level=info 
msg="CreateContainer within sandbox \"ccccd471b4c15918a1c9b5dd3e2a85074291ea6b11ae284abe44e1cda24b577b\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 17 00:00:45.862085 containerd[2150]: time="2026-01-17T00:00:45.862006493Z" level=info msg="CreateContainer within sandbox \"ccccd471b4c15918a1c9b5dd3e2a85074291ea6b11ae284abe44e1cda24b577b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"d5bd26d19ebd2c3d3217f7f706e84acf30b435f4705f85eec5ce609df76600ef\"" Jan 17 00:00:45.864018 containerd[2150]: time="2026-01-17T00:00:45.863838845Z" level=info msg="StartContainer for \"d5bd26d19ebd2c3d3217f7f706e84acf30b435f4705f85eec5ce609df76600ef\"" Jan 17 00:00:45.981868 containerd[2150]: time="2026-01-17T00:00:45.981800178Z" level=info msg="StartContainer for \"d5bd26d19ebd2c3d3217f7f706e84acf30b435f4705f85eec5ce609df76600ef\" returns successfully" Jan 17 00:00:46.118490 kubelet[3417]: E0117 00:00:46.117763 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:00:46.424404 kubelet[3417]: I0117 00:00:46.424253 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7868898746-9x4vb" podStartSLOduration=1.893959078 podStartE2EDuration="10.424230904s" podCreationTimestamp="2026-01-17 00:00:36 +0000 UTC" firstStartedPulling="2026-01-17 00:00:37.263209531 +0000 UTC m=+34.392219616" lastFinishedPulling="2026-01-17 00:00:45.793481285 +0000 UTC m=+42.922491442" observedRunningTime="2026-01-17 00:00:46.42286474 +0000 UTC m=+43.551874909" watchObservedRunningTime="2026-01-17 00:00:46.424230904 +0000 UTC m=+43.553240977" Jan 17 00:00:46.435936 kubelet[3417]: E0117 00:00:46.435627 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.435936 kubelet[3417]: W0117 00:00:46.435665 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.435936 kubelet[3417]: E0117 00:00:46.435699 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.438096 kubelet[3417]: E0117 00:00:46.437867 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.438096 kubelet[3417]: W0117 00:00:46.437925 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.438096 kubelet[3417]: E0117 00:00:46.438028 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.441528 kubelet[3417]: E0117 00:00:46.440533 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.441528 kubelet[3417]: W0117 00:00:46.440591 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.441528 kubelet[3417]: E0117 00:00:46.440624 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.443297 kubelet[3417]: E0117 00:00:46.442770 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.443297 kubelet[3417]: W0117 00:00:46.442807 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.443297 kubelet[3417]: E0117 00:00:46.442841 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.446696 kubelet[3417]: E0117 00:00:46.445234 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.446696 kubelet[3417]: W0117 00:00:46.445270 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.446696 kubelet[3417]: E0117 00:00:46.445302 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.447934 kubelet[3417]: E0117 00:00:46.447112 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.447934 kubelet[3417]: W0117 00:00:46.447144 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.447934 kubelet[3417]: E0117 00:00:46.447176 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.449557 kubelet[3417]: E0117 00:00:46.449279 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.449557 kubelet[3417]: W0117 00:00:46.449341 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.449557 kubelet[3417]: E0117 00:00:46.449375 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.450508 kubelet[3417]: E0117 00:00:46.450412 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.450786 kubelet[3417]: W0117 00:00:46.450616 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.450786 kubelet[3417]: E0117 00:00:46.450653 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.452214 kubelet[3417]: E0117 00:00:46.452180 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.452596 kubelet[3417]: W0117 00:00:46.452348 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.452596 kubelet[3417]: E0117 00:00:46.452386 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.453193 kubelet[3417]: E0117 00:00:46.452963 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.453193 kubelet[3417]: W0117 00:00:46.452998 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.453193 kubelet[3417]: E0117 00:00:46.453028 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.453898 kubelet[3417]: E0117 00:00:46.453865 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.454246 kubelet[3417]: W0117 00:00:46.454024 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.454246 kubelet[3417]: E0117 00:00:46.454063 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.454733 kubelet[3417]: E0117 00:00:46.454704 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.454979 kubelet[3417]: W0117 00:00:46.454838 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.454979 kubelet[3417]: E0117 00:00:46.454874 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.455831 kubelet[3417]: E0117 00:00:46.455568 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.456582 kubelet[3417]: W0117 00:00:46.456421 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.457395 kubelet[3417]: E0117 00:00:46.456922 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.463211 kubelet[3417]: E0117 00:00:46.463043 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.463701 kubelet[3417]: W0117 00:00:46.463386 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.463701 kubelet[3417]: E0117 00:00:46.463429 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.464796 kubelet[3417]: E0117 00:00:46.464622 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.464796 kubelet[3417]: W0117 00:00:46.464677 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.464796 kubelet[3417]: E0117 00:00:46.464713 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.490494 kubelet[3417]: E0117 00:00:46.488599 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.490494 kubelet[3417]: W0117 00:00:46.488635 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.490494 kubelet[3417]: E0117 00:00:46.488693 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.493477 kubelet[3417]: E0117 00:00:46.492110 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.493709 kubelet[3417]: W0117 00:00:46.493672 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.494911 kubelet[3417]: E0117 00:00:46.494322 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.496002 kubelet[3417]: E0117 00:00:46.495364 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.496002 kubelet[3417]: W0117 00:00:46.495398 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.496002 kubelet[3417]: E0117 00:00:46.495638 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.496002 kubelet[3417]: E0117 00:00:46.495891 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.496002 kubelet[3417]: W0117 00:00:46.495908 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.496319 kubelet[3417]: E0117 00:00:46.496016 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.498523 kubelet[3417]: E0117 00:00:46.497227 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.498523 kubelet[3417]: W0117 00:00:46.497266 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.498523 kubelet[3417]: E0117 00:00:46.497395 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.498523 kubelet[3417]: E0117 00:00:46.497830 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.498523 kubelet[3417]: W0117 00:00:46.497850 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.498523 kubelet[3417]: E0117 00:00:46.498094 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.498978 kubelet[3417]: E0117 00:00:46.498637 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.498978 kubelet[3417]: W0117 00:00:46.498661 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.499089 kubelet[3417]: E0117 00:00:46.499036 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.499629 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.501429 kubelet[3417]: W0117 00:00:46.499664 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.500039 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.501429 kubelet[3417]: W0117 00:00:46.500057 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.500412 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.500475 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.500585 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.501429 kubelet[3417]: W0117 00:00:46.500602 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.501429 kubelet[3417]: E0117 00:00:46.500664 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.501988 kubelet[3417]: E0117 00:00:46.501732 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.501988 kubelet[3417]: W0117 00:00:46.501758 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.501988 kubelet[3417]: E0117 00:00:46.501965 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.503691 kubelet[3417]: E0117 00:00:46.502178 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.503691 kubelet[3417]: W0117 00:00:46.502209 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.503691 kubelet[3417]: E0117 00:00:46.502240 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.504319 kubelet[3417]: E0117 00:00:46.503839 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.504319 kubelet[3417]: W0117 00:00:46.503867 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.504319 kubelet[3417]: E0117 00:00:46.503995 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.505049 kubelet[3417]: E0117 00:00:46.505027 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.505115 kubelet[3417]: W0117 00:00:46.505059 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.506874 kubelet[3417]: E0117 00:00:46.505608 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.506874 kubelet[3417]: W0117 00:00:46.505643 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.506874 kubelet[3417]: E0117 00:00:46.506240 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.506874 kubelet[3417]: E0117 00:00:46.506737 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.507224 kubelet[3417]: E0117 00:00:46.507164 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.507224 kubelet[3417]: W0117 00:00:46.507184 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.507340 kubelet[3417]: E0117 00:00:46.507251 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:46.508012 kubelet[3417]: E0117 00:00:46.507964 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.508012 kubelet[3417]: W0117 00:00:46.508001 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.510105 kubelet[3417]: E0117 00:00:46.508046 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:46.510105 kubelet[3417]: E0117 00:00:46.508877 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:46.510105 kubelet[3417]: W0117 00:00:46.508903 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:46.510105 kubelet[3417]: E0117 00:00:46.508934 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.472731 kubelet[3417]: E0117 00:00:47.472498 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.472731 kubelet[3417]: W0117 00:00:47.472535 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.472731 kubelet[3417]: E0117 00:00:47.472566 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.473784 kubelet[3417]: E0117 00:00:47.473559 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.473784 kubelet[3417]: W0117 00:00:47.473587 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.473784 kubelet[3417]: E0117 00:00:47.473614 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.474269 kubelet[3417]: E0117 00:00:47.473983 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.474269 kubelet[3417]: W0117 00:00:47.474000 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.474269 kubelet[3417]: E0117 00:00:47.474020 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.474739 kubelet[3417]: E0117 00:00:47.474539 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.474739 kubelet[3417]: W0117 00:00:47.474562 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.474739 kubelet[3417]: E0117 00:00:47.474583 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:47.474988 kubelet[3417]: E0117 00:00:47.474969 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.475247 kubelet[3417]: W0117 00:00:47.475077 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.475247 kubelet[3417]: E0117 00:00:47.475103 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.475500 kubelet[3417]: E0117 00:00:47.475480 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.475594 kubelet[3417]: W0117 00:00:47.475574 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.475692 kubelet[3417]: E0117 00:00:47.475672 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.476237 kubelet[3417]: E0117 00:00:47.476059 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.476237 kubelet[3417]: W0117 00:00:47.476080 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.476237 kubelet[3417]: E0117 00:00:47.476099 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.476544 kubelet[3417]: E0117 00:00:47.476523 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.476638 kubelet[3417]: W0117 00:00:47.476618 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.476899 kubelet[3417]: E0117 00:00:47.476732 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.477253 kubelet[3417]: E0117 00:00:47.477073 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.477253 kubelet[3417]: W0117 00:00:47.477093 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.477253 kubelet[3417]: E0117 00:00:47.477112 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 17 00:00:47.477582 kubelet[3417]: E0117 00:00:47.477562 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.477677 kubelet[3417]: W0117 00:00:47.477657 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.477778 kubelet[3417]: E0117 00:00:47.477755 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.478163 kubelet[3417]: E0117 00:00:47.478142 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.478415 kubelet[3417]: W0117 00:00:47.478244 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.478415 kubelet[3417]: E0117 00:00:47.478271 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.478677 kubelet[3417]: E0117 00:00:47.478658 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.478777 kubelet[3417]: W0117 00:00:47.478755 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.479013 kubelet[3417]: E0117 00:00:47.478856 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.479359 kubelet[3417]: E0117 00:00:47.479178 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.479359 kubelet[3417]: W0117 00:00:47.479198 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.479359 kubelet[3417]: E0117 00:00:47.479217 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 17 00:00:47.479728 kubelet[3417]: E0117 00:00:47.479672 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 17 00:00:47.479826 kubelet[3417]: W0117 00:00:47.479802 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 17 00:00:47.479926 kubelet[3417]: E0117 00:00:47.479905 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input"
Jan 17 00:00:47.480322 kubelet[3417]: E0117 00:00:47.480301 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:00:47.480555 kubelet[3417]: W0117 00:00:47.480412 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:00:47.480555 kubelet[3417]: E0117 00:00:47.480471 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
Jan 17 00:00:48.117379 kubelet[3417]: E0117 00:00:48.116767 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:48.486496 kubelet[3417]: E0117 00:00:48.486411 3417 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 17 00:00:48.486496 kubelet[3417]: W0117 00:00:48.486476 3417 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 17 00:00:48.487102 kubelet[3417]: E0117 00:00:48.486511 3417 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
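The kubelet burst above is one failure repeating: the dynamic FlexVolume probe finds a plugin directory nodeagent~uds whose uds executable is missing, the init call therefore produces no output, and unmarshalling the empty string as JSON fails. For orientation, a FlexVolume driver is just an executable that answers subcommands with a JSON status object on stdout. The sketch below is a hypothetical stand-in written in Python for brevity, not the real nodeagent~uds binary; it shows only the minimal init handshake the kubelet is waiting for.

#!/usr/bin/env python3
# Minimal sketch of the FlexVolume call convention probed above:
# the kubelet runs "<driver> init" and parses stdout as JSON, so an
# empty reply yields the "unexpected end of JSON input" errors seen
# in this log. Illustrative only; not Istio's nodeagent~uds driver.
import json
import sys

def main() -> int:
    op = sys.argv[1] if len(sys.argv) > 1 else ""
    if op == "init":
        # Report success and declare that this driver needs no
        # controller-side attach/detach handling.
        print(json.dumps({"status": "Success", "capabilities": {"attach": False}}))
        return 0
    # Decline every operation this sketch does not implement.
    print(json.dumps({"status": "Not supported"}))
    return 1

if __name__ == "__main__":
    sys.exit(main())

Any executable at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds answering init like this would quiet the probe; until then the repeated triplet appears to be benign noise from a driver directory left behind without its binary.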
Jan 17 00:00:50.116934 kubelet[3417]: E0117 00:00:50.116822 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:52.117168 kubelet[3417]: E0117 00:00:52.116676 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:52.229485 containerd[2150]: time="2026-01-17T00:00:52.227245725Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:52.229485 containerd[2150]: time="2026-01-17T00:00:52.228616509Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741"
Jan 17 00:00:52.230508 containerd[2150]: time="2026-01-17T00:00:52.230422425Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:52.237231 containerd[2150]: time="2026-01-17T00:00:52.237156717Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:52.239818 containerd[2150]: time="2026-01-17T00:00:52.238935741Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 6.44320274s"
Jan 17 00:00:52.239818 containerd[2150]: time="2026-01-17T00:00:52.238998765Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\""
Jan 17 00:00:52.246122 containerd[2150]: time="2026-01-17T00:00:52.246073557Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}"
Jan 17 00:00:52.271119 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3125104275.mount: Deactivated successfully.
Jan 17 00:00:52.275426 containerd[2150]: time="2026-01-17T00:00:52.275358873Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016\""
Jan 17 00:00:52.277384 containerd[2150]: time="2026-01-17T00:00:52.277124061Z" level=info msg="StartContainer for \"7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016\""
Jan 17 00:00:52.394506 containerd[2150]: time="2026-01-17T00:00:52.393779458Z" level=info msg="StartContainer for \"7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016\" returns successfully"
Jan 17 00:00:52.482017 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016-rootfs.mount: Deactivated successfully.
Jan 17 00:00:52.654544 containerd[2150]: time="2026-01-17T00:00:52.654293315Z" level=info msg="shim disconnected" id=7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016 namespace=k8s.io
Jan 17 00:00:52.654544 containerd[2150]: time="2026-01-17T00:00:52.654367979Z" level=warning msg="cleaning up after shim disconnected" id=7e97cba060d0e4986f0fff8c5990118f38f5e6528f99caf3cb6bcf38f7936016 namespace=k8s.io
Jan 17 00:00:52.654544 containerd[2150]: time="2026-01-17T00:00:52.654388451Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:00:53.432818 containerd[2150]: time="2026-01-17T00:00:53.432481943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 17 00:00:54.117426 kubelet[3417]: E0117 00:00:54.116111 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:56.116652 kubelet[3417]: E0117 00:00:56.116575 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:58.116078 kubelet[3417]: E0117 00:00:58.116016 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:00:59.127141 systemd[1]: Started sshd@7-172.31.23.167:22-68.220.241.50:44632.service - OpenSSH per-connection server daemon (68.220.241.50:44632).
Jan 17 00:00:59.598292 containerd[2150]: time="2026-01-17T00:00:59.598224678Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:59.599922 containerd[2150]: time="2026-01-17T00:00:59.599826066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 17 00:00:59.601241 containerd[2150]: time="2026-01-17T00:00:59.601165206Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:59.609477 containerd[2150]: time="2026-01-17T00:00:59.608070642Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 17 00:00:59.609705 containerd[2150]: time="2026-01-17T00:00:59.609663150Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 6.177116887s"
Jan 17 00:00:59.609825 containerd[2150]: time="2026-01-17T00:00:59.609797286Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 17 00:00:59.616835 containerd[2150]: time="2026-01-17T00:00:59.616704690Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 17 00:00:59.649825 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount221513212.mount: Deactivated successfully.
Jan 17 00:00:59.651741 containerd[2150]: time="2026-01-17T00:00:59.650314794Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6\""
Jan 17 00:00:59.656165 containerd[2150]: time="2026-01-17T00:00:59.655932306Z" level=info msg="StartContainer for \"4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6\""
Jan 17 00:00:59.682469 sshd[4417]: Accepted publickey for core from 68.220.241.50 port 44632 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20
Jan 17 00:00:59.688399 sshd[4417]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 17 00:00:59.701173 systemd-logind[2113]: New session 8 of user core.
Jan 17 00:00:59.710060 systemd[1]: Started session-8.scope - Session 8 of User core.
Jan 17 00:00:59.802612 containerd[2150]: time="2026-01-17T00:00:59.802549279Z" level=info msg="StartContainer for \"4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6\" returns successfully"
Jan 17 00:01:00.117008 kubelet[3417]: E0117 00:01:00.116912 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4"
Jan 17 00:01:00.276793 sshd[4417]: pam_unix(sshd:session): session closed for user core
Jan 17 00:01:00.288897 systemd[1]: sshd@7-172.31.23.167:22-68.220.241.50:44632.service: Deactivated successfully.
Jan 17 00:01:00.294149 systemd-logind[2113]: Session 8 logged out. Waiting for processes to exit.
Jan 17 00:01:00.314216 systemd[1]: session-8.scope: Deactivated successfully.
Jan 17 00:01:00.320840 systemd-logind[2113]: Removed session 8.
Jan 17 00:01:01.232907 containerd[2150]: time="2026-01-17T00:01:01.232491282Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: failed to load CNI config list file /etc/cni/net.d/10-calico.conflist: error parsing configuration list: unexpected end of JSON input: invalid cni config: failed to load cni config"
Jan 17 00:01:01.252678 kubelet[3417]: I0117 00:01:01.252561 3417 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 17 00:01:01.297968 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6-rootfs.mount: Deactivated successfully.
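The reload failure above is containerd re-reading /etc/cni/net.d on a filesystem change event while Calico's install-cni container, started a second earlier, is most likely still populating the directory, so 10-calico.conflist is momentarily incomplete and fails JSON parsing with the same "unexpected end of JSON input". The check containerd performs can be approximated with the hypothetical diagnostic below (not part of containerd or Calico; the path comes from the log line itself).

#!/usr/bin/env python3
# Sketch: verify that the CNI conflist containerd rejected above is
# complete, parseable JSON with the fields a conflist must carry.
import json
import sys

CONFLIST = "/etc/cni/net.d/10-calico.conflist"

def main() -> int:
    try:
        with open(CONFLIST) as f:
            conf = json.load(f)
    except (OSError, json.JSONDecodeError) as e:
        # A half-written file fails here, matching the log's
        # "unexpected end of JSON input".
        print(f"invalid cni config: {e}")
        return 1
    if not conf.get("name") or not conf.get("plugins"):
        print("conflist parsed but is missing name/plugins")
        return 1
    print(f"ok: {conf['name']} with {len(conf['plugins'])} plugin(s)")
    return 0

if __name__ == "__main__":
    sys.exit(main())

A conflist that parses and carries a name plus a non-empty plugins array should clear this error; on a healthy bring-up it resolves by itself once install-cni finishes writing.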
Jan 17 00:01:01.440965 kubelet[3417]: I0117 00:01:01.440793 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/78e9de4b-ea97-4b48-8f59-1242c0c3be02-calico-apiserver-certs\") pod \"calico-apiserver-5fccc8c4dd-wxtgw\" (UID: \"78e9de4b-ea97-4b48-8f59-1242c0c3be02\") " pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw"
Jan 17 00:01:01.441186 kubelet[3417]: I0117 00:01:01.441106 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dlct9\" (UniqueName: \"kubernetes.io/projected/78e9de4b-ea97-4b48-8f59-1242c0c3be02-kube-api-access-dlct9\") pod \"calico-apiserver-5fccc8c4dd-wxtgw\" (UID: \"78e9de4b-ea97-4b48-8f59-1242c0c3be02\") " pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw"
Jan 17 00:01:01.442036 kubelet[3417]: I0117 00:01:01.441288 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nf5m9\" (UniqueName: \"kubernetes.io/projected/d0504424-3111-46f2-be7a-effe09d60f69-kube-api-access-nf5m9\") pod \"coredns-668d6bf9bc-xjcx7\" (UID: \"d0504424-3111-46f2-be7a-effe09d60f69\") " pod="kube-system/coredns-668d6bf9bc-xjcx7"
Jan 17 00:01:01.442036 kubelet[3417]: I0117 00:01:01.441344 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/26583054-1df4-4aad-bd58-41f9694f0072-tigera-ca-bundle\") pod \"calico-kube-controllers-74f49dc95d-4gk47\" (UID: \"26583054-1df4-4aad-bd58-41f9694f0072\") " pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47"
Jan 17 00:01:01.442036 kubelet[3417]: I0117 00:01:01.441775 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bhnvq\" (UniqueName: \"kubernetes.io/projected/cb54f0b2-682d-402d-a9c8-8c6e24f363be-kube-api-access-bhnvq\") pod \"coredns-668d6bf9bc-4l665\" (UID: \"cb54f0b2-682d-402d-a9c8-8c6e24f363be\") " pod="kube-system/coredns-668d6bf9bc-4l665"
Jan 17 00:01:01.443230 kubelet[3417]: I0117 00:01:01.441964 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/d0504424-3111-46f2-be7a-effe09d60f69-config-volume\") pod \"coredns-668d6bf9bc-xjcx7\" (UID: \"d0504424-3111-46f2-be7a-effe09d60f69\") " pod="kube-system/coredns-668d6bf9bc-xjcx7"
Jan 17 00:01:01.443230 kubelet[3417]: I0117 00:01:01.442146 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2snwq\" (UniqueName: \"kubernetes.io/projected/26583054-1df4-4aad-bd58-41f9694f0072-kube-api-access-2snwq\") pod \"calico-kube-controllers-74f49dc95d-4gk47\" (UID: \"26583054-1df4-4aad-bd58-41f9694f0072\") " pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47"
Jan 17 00:01:01.444514 kubelet[3417]: I0117 00:01:01.442354 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/cb54f0b2-682d-402d-a9c8-8c6e24f363be-config-volume\") pod \"coredns-668d6bf9bc-4l665\" (UID: \"cb54f0b2-682d-402d-a9c8-8c6e24f363be\") " pod="kube-system/coredns-668d6bf9bc-4l665"
Jan 17 00:01:01.545323 kubelet[3417]: I0117 00:01:01.544988 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5d919700-9b50-4829-84da-97568c603805-calico-apiserver-certs\") pod \"calico-apiserver-5fccc8c4dd-j7cxw\" (UID: \"5d919700-9b50-4829-84da-97568c603805\") " pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw"
Jan 17 00:01:01.545323 kubelet[3417]: I0117 00:01:01.545102 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c2wrd\" (UniqueName: \"kubernetes.io/projected/e3af7f92-2f69-4868-9102-5ead109a6c2e-kube-api-access-c2wrd\") pod \"whisker-7798df58d7-vkd8q\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") " pod="calico-system/whisker-7798df58d7-vkd8q"
Jan 17 00:01:01.545323 kubelet[3417]: I0117 00:01:01.545938 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e6a5f346-af6c-40f3-8c32-a682e7923b77-goldmane-ca-bundle\") pod \"goldmane-666569f655-vmx9m\" (UID: \"e6a5f346-af6c-40f3-8c32-a682e7923b77\") " pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:01.545323 kubelet[3417]: I0117 00:01:01.546071 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9lmlh\" (UniqueName: \"kubernetes.io/projected/e6a5f346-af6c-40f3-8c32-a682e7923b77-kube-api-access-9lmlh\") pod \"goldmane-666569f655-vmx9m\" (UID: \"e6a5f346-af6c-40f3-8c32-a682e7923b77\") " pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:01.545323 kubelet[3417]: I0117 00:01:01.546184 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/e6a5f346-af6c-40f3-8c32-a682e7923b77-goldmane-key-pair\") pod \"goldmane-666569f655-vmx9m\" (UID: \"e6a5f346-af6c-40f3-8c32-a682e7923b77\") " pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:01.549534 kubelet[3417]: I0117 00:01:01.546226 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xkkn2\" (UniqueName: \"kubernetes.io/projected/5d919700-9b50-4829-84da-97568c603805-kube-api-access-xkkn2\") pod \"calico-apiserver-5fccc8c4dd-j7cxw\" (UID: \"5d919700-9b50-4829-84da-97568c603805\") " pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw"
Jan 17 00:01:01.549534 kubelet[3417]: I0117 00:01:01.546265 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-backend-key-pair\") pod \"whisker-7798df58d7-vkd8q\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") " pod="calico-system/whisker-7798df58d7-vkd8q"
Jan 17 00:01:01.549534 kubelet[3417]: I0117 00:01:01.546302 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-ca-bundle\") pod \"whisker-7798df58d7-vkd8q\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") " pod="calico-system/whisker-7798df58d7-vkd8q"
Jan 17 00:01:01.549534 kubelet[3417]: I0117 00:01:01.546411 3417 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e6a5f346-af6c-40f3-8c32-a682e7923b77-config\") pod \"goldmane-666569f655-vmx9m\" (UID: \"e6a5f346-af6c-40f3-8c32-a682e7923b77\") " pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:01.560723 containerd[2150]: time="2026-01-17T00:01:01.558952171Z" level=info msg="shim disconnected" id=4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6 namespace=k8s.io
Jan 17 00:01:01.560723 containerd[2150]: time="2026-01-17T00:01:01.559051975Z" level=warning msg="cleaning up after shim disconnected" id=4f08a725e44340bbe4ddbc8ea3ae1fc9c023aa903ef798897e5065d736ee42d6 namespace=k8s.io
Jan 17 00:01:01.560723 containerd[2150]: time="2026-01-17T00:01:01.559076863Z" level=info msg="cleaning up dead shim" namespace=k8s.io
Jan 17 00:01:01.701158 containerd[2150]: time="2026-01-17T00:01:01.700464884Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f49dc95d-4gk47,Uid:26583054-1df4-4aad-bd58-41f9694f0072,Namespace:calico-system,Attempt:0,}"
Jan 17 00:01:01.703113 containerd[2150]: time="2026-01-17T00:01:01.702305384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-wxtgw,Uid:78e9de4b-ea97-4b48-8f59-1242c0c3be02,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:01:01.708105 containerd[2150]: time="2026-01-17T00:01:01.708031892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4l665,Uid:cb54f0b2-682d-402d-a9c8-8c6e24f363be,Namespace:kube-system,Attempt:0,}"
Jan 17 00:01:01.708588 containerd[2150]: time="2026-01-17T00:01:01.708533900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjcx7,Uid:d0504424-3111-46f2-be7a-effe09d60f69,Namespace:kube-system,Attempt:0,}"
Jan 17 00:01:01.763267 containerd[2150]: time="2026-01-17T00:01:01.763171880Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-j7cxw,Uid:5d919700-9b50-4829-84da-97568c603805,Namespace:calico-apiserver,Attempt:0,}"
Jan 17 00:01:01.768819 containerd[2150]: time="2026-01-17T00:01:01.768762788Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7798df58d7-vkd8q,Uid:e3af7f92-2f69-4868-9102-5ead109a6c2e,Namespace:calico-system,Attempt:0,}"
Jan 17 00:01:01.775647 containerd[2150]: time="2026-01-17T00:01:01.775595324Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmx9m,Uid:e6a5f346-af6c-40f3-8c32-a682e7923b77,Namespace:calico-system,Attempt:0,}"
Jan 17 00:01:02.144139 containerd[2150]: time="2026-01-17T00:01:02.143974398Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjr5r,Uid:7326267c-1eb2-4759-b98f-e8dc2742ecd4,Namespace:calico-system,Attempt:0,}"
Jan 17 00:01:02.276531 containerd[2150]: time="2026-01-17T00:01:02.276103531Z" level=error msg="Failed to destroy network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.303474 containerd[2150]: time="2026-01-17T00:01:02.298245055Z" level=error msg="encountered an error cleaning up failed sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.303474 containerd[2150]: time="2026-01-17T00:01:02.298342303Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjcx7,Uid:d0504424-3111-46f2-be7a-effe09d60f69,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.303687 kubelet[3417]: E0117 00:01:02.299601 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.303687 kubelet[3417]: E0117 00:01:02.299696 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xjcx7"
Jan 17 00:01:02.303687 kubelet[3417]: E0117 00:01:02.299728 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-xjcx7"
Jan 17 00:01:02.304688 kubelet[3417]: E0117 00:01:02.304581 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-xjcx7_kube-system(d0504424-3111-46f2-be7a-effe09d60f69)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-xjcx7_kube-system(d0504424-3111-46f2-be7a-effe09d60f69)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xjcx7" podUID="d0504424-3111-46f2-be7a-effe09d60f69"
Jan 17 00:01:02.377262 containerd[2150]: time="2026-01-17T00:01:02.376843699Z" level=error msg="Failed to destroy network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.390780 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3-shm.mount: Deactivated successfully.
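All of the sandbox setup and teardown failures above and below share one precondition spelled out in the error text itself: the Calico CNI plugin stats /var/lib/calico/nodename, a file the calico/node container writes once it is running with /var/lib/calico/ mounted. Until that file exists, every RunPodSandbox attempt fails the same way. A hypothetical preflight mirroring that stat, illustrative only:

#!/usr/bin/env python3
# Sketch of the precondition checked by the (add)/(delete) errors in
# this log: /var/lib/calico/nodename must exist before pod networking
# can be set up. Diagnostic aid only; not part of Calico or containerd.
import pathlib
import sys

NODENAME = pathlib.Path("/var/lib/calico/nodename")

def main() -> int:
    if not NODENAME.exists():
        print(f"{NODENAME}: no such file or directory: "
              "check that the calico/node container is running "
              "and has mounted /var/lib/calico/")
        return 1
    print(f"calico nodename: {NODENAME.read_text().strip()}")
    return 0

if __name__ == "__main__":
    sys.exit(main())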
Jan 17 00:01:02.394100 containerd[2150]: time="2026-01-17T00:01:02.394024219Z" level=error msg="encountered an error cleaning up failed sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.395105 containerd[2150]: time="2026-01-17T00:01:02.394577023Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4l665,Uid:cb54f0b2-682d-402d-a9c8-8c6e24f363be,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.396936 kubelet[3417]: E0117 00:01:02.396661 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.396936 kubelet[3417]: E0117 00:01:02.396748 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4l665"
Jan 17 00:01:02.396936 kubelet[3417]: E0117 00:01:02.396783 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-4l665"
Jan 17 00:01:02.399332 kubelet[3417]: E0117 00:01:02.396866 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-4l665_kube-system(cb54f0b2-682d-402d-a9c8-8c6e24f363be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-4l665_kube-system(cb54f0b2-682d-402d-a9c8-8c6e24f363be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4l665" podUID="cb54f0b2-682d-402d-a9c8-8c6e24f363be"
Jan 17 00:01:02.416008 containerd[2150]: time="2026-01-17T00:01:02.415911288Z" level=error msg="Failed to destroy network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.424212 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca-shm.mount: Deactivated successfully.
Jan 17 00:01:02.426064 containerd[2150]: time="2026-01-17T00:01:02.425789312Z" level=error msg="encountered an error cleaning up failed sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.426616 containerd[2150]: time="2026-01-17T00:01:02.426302288Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f49dc95d-4gk47,Uid:26583054-1df4-4aad-bd58-41f9694f0072,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.428767 kubelet[3417]: E0117 00:01:02.428227 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.428937 kubelet[3417]: E0117 00:01:02.428819 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47"
Jan 17 00:01:02.428937 kubelet[3417]: E0117 00:01:02.428861 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47"
Jan 17 00:01:02.430646 kubelet[3417]: E0117 00:01:02.430276 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072"
Jan 17 00:01:02.458752 containerd[2150]: time="2026-01-17T00:01:02.458688332Z" level=error msg="Failed to destroy network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.462173 containerd[2150]: time="2026-01-17T00:01:02.461615228Z" level=error msg="Failed to destroy network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.462173 containerd[2150]: time="2026-01-17T00:01:02.461971928Z" level=error msg="encountered an error cleaning up failed sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.462173 containerd[2150]: time="2026-01-17T00:01:02.462038096Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7798df58d7-vkd8q,Uid:e3af7f92-2f69-4868-9102-5ead109a6c2e,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.463530 kubelet[3417]: E0117 00:01:02.462320 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.463530 kubelet[3417]: E0117 00:01:02.462392 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7798df58d7-vkd8q"
Jan 17 00:01:02.463530 kubelet[3417]: E0117 00:01:02.462423 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-7798df58d7-vkd8q"
Jan 17 00:01:02.463761 kubelet[3417]: E0117 00:01:02.462548 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-7798df58d7-vkd8q_calico-system(e3af7f92-2f69-4868-9102-5ead109a6c2e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-7798df58d7-vkd8q_calico-system(e3af7f92-2f69-4868-9102-5ead109a6c2e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7798df58d7-vkd8q" podUID="e3af7f92-2f69-4868-9102-5ead109a6c2e"
Jan 17 00:01:02.465760 containerd[2150]: time="2026-01-17T00:01:02.462130136Z" level=error msg="encountered an error cleaning up failed sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.465882 containerd[2150]: time="2026-01-17T00:01:02.465796856Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmx9m,Uid:e6a5f346-af6c-40f3-8c32-a682e7923b77,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.466022 containerd[2150]: time="2026-01-17T00:01:02.465154004Z" level=error msg="Failed to destroy network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.470641 kubelet[3417]: E0117 00:01:02.467581 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 17 00:01:02.470641 kubelet[3417]: E0117 00:01:02.467658 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:02.470641 kubelet[3417]: E0117 00:01:02.467699 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-vmx9m"
Jan 17 00:01:02.473198 containerd[2150]: time="2026-01-17T00:01:02.470525084Z" level=error msg="encountered an error cleaning up failed sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\", marking sandbox state as
SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.473198 containerd[2150]: time="2026-01-17T00:01:02.470621492Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-wxtgw,Uid:78e9de4b-ea97-4b48-8f59-1242c0c3be02,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.469358 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0-shm.mount: Deactivated successfully. Jan 17 00:01:02.473529 kubelet[3417]: E0117 00:01:02.470535 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-vmx9m_calico-system(e6a5f346-af6c-40f3-8c32-a682e7923b77)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-vmx9m_calico-system(e6a5f346-af6c-40f3-8c32-a682e7923b77)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:02.479317 kubelet[3417]: E0117 00:01:02.477900 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.479317 kubelet[3417]: E0117 00:01:02.477981 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" Jan 17 00:01:02.479317 kubelet[3417]: E0117 00:01:02.478015 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" Jan 17 00:01:02.479812 kubelet[3417]: E0117 00:01:02.478074 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fccc8c4dd-wxtgw_calico-apiserver(78e9de4b-ea97-4b48-8f59-1242c0c3be02)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-apiserver-5fccc8c4dd-wxtgw_calico-apiserver(78e9de4b-ea97-4b48-8f59-1242c0c3be02)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:01:02.484172 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138-shm.mount: Deactivated successfully. Jan 17 00:01:02.485405 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7-shm.mount: Deactivated successfully. Jan 17 00:01:02.488139 kubelet[3417]: I0117 00:01:02.487780 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:01:02.495200 containerd[2150]: time="2026-01-17T00:01:02.494679404Z" level=info msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\"" Jan 17 00:01:02.495200 containerd[2150]: time="2026-01-17T00:01:02.494969564Z" level=info msg="Ensure that sandbox 8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3 in task-service has been cleanup successfully" Jan 17 00:01:02.500484 kubelet[3417]: I0117 00:01:02.499768 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:01:02.502407 containerd[2150]: time="2026-01-17T00:01:02.501346148Z" level=info msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" Jan 17 00:01:02.502407 containerd[2150]: time="2026-01-17T00:01:02.501641768Z" level=info msg="Ensure that sandbox fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7 in task-service has been cleanup successfully" Jan 17 00:01:02.538252 containerd[2150]: time="2026-01-17T00:01:02.538179752Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 17 00:01:02.550578 containerd[2150]: time="2026-01-17T00:01:02.549260252Z" level=error msg="Failed to destroy network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.550730 kubelet[3417]: I0117 00:01:02.550630 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:02.554115 containerd[2150]: time="2026-01-17T00:01:02.554044328Z" level=error msg="encountered an error cleaning up failed sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.554909 containerd[2150]: time="2026-01-17T00:01:02.554340728Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-j7cxw,Uid:5d919700-9b50-4829-84da-97568c603805,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.568955 containerd[2150]: time="2026-01-17T00:01:02.559617056Z" level=info msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" Jan 17 00:01:02.568955 containerd[2150]: time="2026-01-17T00:01:02.568529816Z" level=info msg="Ensure that sandbox 0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60 in task-service has been cleanup successfully" Jan 17 00:01:02.572915 kubelet[3417]: E0117 00:01:02.572852 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.573185 kubelet[3417]: E0117 00:01:02.573127 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" Jan 17 00:01:02.573399 kubelet[3417]: E0117 00:01:02.573351 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" Jan 17 00:01:02.573673 kubelet[3417]: E0117 00:01:02.573622 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:02.577261 kubelet[3417]: I0117 00:01:02.576970 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:02.601881 containerd[2150]: time="2026-01-17T00:01:02.601823924Z" level=info msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\"" Jan 17 
00:01:02.605103 containerd[2150]: time="2026-01-17T00:01:02.604666652Z" level=info msg="Ensure that sandbox 1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca in task-service has been cleanup successfully" Jan 17 00:01:02.636374 kubelet[3417]: I0117 00:01:02.636331 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:02.641073 containerd[2150]: time="2026-01-17T00:01:02.641020773Z" level=info msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\"" Jan 17 00:01:02.643692 containerd[2150]: time="2026-01-17T00:01:02.643293237Z" level=info msg="Ensure that sandbox 5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138 in task-service has been cleanup successfully" Jan 17 00:01:02.668552 kubelet[3417]: I0117 00:01:02.666340 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:02.669474 containerd[2150]: time="2026-01-17T00:01:02.668999445Z" level=info msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" Jan 17 00:01:02.671033 containerd[2150]: time="2026-01-17T00:01:02.670979493Z" level=info msg="Ensure that sandbox 0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0 in task-service has been cleanup successfully" Jan 17 00:01:02.726999 containerd[2150]: time="2026-01-17T00:01:02.726920877Z" level=error msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" failed" error="failed to destroy network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.727783 kubelet[3417]: E0117 00:01:02.727370 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:01:02.727783 kubelet[3417]: E0117 00:01:02.727521 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7"} Jan 17 00:01:02.727783 kubelet[3417]: E0117 00:01:02.727667 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"78e9de4b-ea97-4b48-8f59-1242c0c3be02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.727783 kubelet[3417]: E0117 00:01:02.727733 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"78e9de4b-ea97-4b48-8f59-1242c0c3be02\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy 
network for sandbox \\\"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:01:02.746519 containerd[2150]: time="2026-01-17T00:01:02.746134461Z" level=error msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" failed" error="failed to destroy network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.748415 kubelet[3417]: E0117 00:01:02.747894 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:02.748415 kubelet[3417]: E0117 00:01:02.747964 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60"} Jan 17 00:01:02.748415 kubelet[3417]: E0117 00:01:02.748018 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"d0504424-3111-46f2-be7a-effe09d60f69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.748415 kubelet[3417]: E0117 00:01:02.748059 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"d0504424-3111-46f2-be7a-effe09d60f69\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-xjcx7" podUID="d0504424-3111-46f2-be7a-effe09d60f69" Jan 17 00:01:02.755312 containerd[2150]: time="2026-01-17T00:01:02.755243289Z" level=error msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" failed" error="failed to destroy network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.756291 containerd[2150]: time="2026-01-17T00:01:02.755655729Z" level=error msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" failed" error="failed to destroy network for sandbox 
\"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.756430 kubelet[3417]: E0117 00:01:02.755905 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:01:02.756430 kubelet[3417]: E0117 00:01:02.755974 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3"} Jan 17 00:01:02.756430 kubelet[3417]: E0117 00:01:02.756029 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"cb54f0b2-682d-402d-a9c8-8c6e24f363be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.756430 kubelet[3417]: E0117 00:01:02.756067 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"cb54f0b2-682d-402d-a9c8-8c6e24f363be\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-4l665" podUID="cb54f0b2-682d-402d-a9c8-8c6e24f363be" Jan 17 00:01:02.756816 kubelet[3417]: E0117 00:01:02.756120 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:02.756816 kubelet[3417]: E0117 00:01:02.756155 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca"} Jan 17 00:01:02.756816 kubelet[3417]: E0117 00:01:02.756192 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"26583054-1df4-4aad-bd58-41f9694f0072\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.756816 
kubelet[3417]: E0117 00:01:02.756227 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"26583054-1df4-4aad-bd58-41f9694f0072\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:02.799008 containerd[2150]: time="2026-01-17T00:01:02.796902201Z" level=error msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" failed" error="failed to destroy network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.799141 kubelet[3417]: E0117 00:01:02.797302 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:02.799141 kubelet[3417]: E0117 00:01:02.797368 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0"} Jan 17 00:01:02.799141 kubelet[3417]: E0117 00:01:02.797422 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e3af7f92-2f69-4868-9102-5ead109a6c2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.799141 kubelet[3417]: E0117 00:01:02.797540 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e3af7f92-2f69-4868-9102-5ead109a6c2e\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-7798df58d7-vkd8q" podUID="e3af7f92-2f69-4868-9102-5ead109a6c2e" Jan 17 00:01:02.800755 containerd[2150]: time="2026-01-17T00:01:02.800662077Z" level=error msg="Failed to destroy network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.803026 containerd[2150]: 
time="2026-01-17T00:01:02.802832985Z" level=error msg="encountered an error cleaning up failed sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.803216 containerd[2150]: time="2026-01-17T00:01:02.803160741Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjr5r,Uid:7326267c-1eb2-4759-b98f-e8dc2742ecd4,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.803837 kubelet[3417]: E0117 00:01:02.803788 3417 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.804185 kubelet[3417]: E0117 00:01:02.804125 3417 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:01:02.804370 kubelet[3417]: E0117 00:01:02.804340 3417 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-jjr5r" Jan 17 00:01:02.805957 kubelet[3417]: E0117 00:01:02.805108 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:02.817404 containerd[2150]: time="2026-01-17T00:01:02.817284850Z" level=error msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" failed" error="failed to destroy network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:02.817982 kubelet[3417]: E0117 00:01:02.817761 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:02.817982 kubelet[3417]: E0117 00:01:02.817834 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138"} Jan 17 00:01:02.817982 kubelet[3417]: E0117 00:01:02.817891 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e6a5f346-af6c-40f3-8c32-a682e7923b77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:02.817982 kubelet[3417]: E0117 00:01:02.817929 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e6a5f346-af6c-40f3-8c32-a682e7923b77\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:03.287693 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027-shm.mount: Deactivated successfully. Jan 17 00:01:03.287966 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4-shm.mount: Deactivated successfully. 
Jan 17 00:01:03.672185 kubelet[3417]: I0117 00:01:03.670477 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:01:03.673949 containerd[2150]: time="2026-01-17T00:01:03.673410322Z" level=info msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\"" Jan 17 00:01:03.675919 containerd[2150]: time="2026-01-17T00:01:03.675295882Z" level=info msg="Ensure that sandbox 61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4 in task-service has been cleanup successfully" Jan 17 00:01:03.678012 kubelet[3417]: I0117 00:01:03.677958 3417 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:03.682728 containerd[2150]: time="2026-01-17T00:01:03.682635454Z" level=info msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" Jan 17 00:01:03.683705 containerd[2150]: time="2026-01-17T00:01:03.683231530Z" level=info msg="Ensure that sandbox 15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027 in task-service has been cleanup successfully" Jan 17 00:01:03.741962 containerd[2150]: time="2026-01-17T00:01:03.741689878Z" level=error msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" failed" error="failed to destroy network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:03.742187 kubelet[3417]: E0117 00:01:03.742104 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:01:03.742262 kubelet[3417]: E0117 00:01:03.742177 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4"} Jan 17 00:01:03.742262 kubelet[3417]: E0117 00:01:03.742236 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"5d919700-9b50-4829-84da-97568c603805\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:03.742419 kubelet[3417]: E0117 00:01:03.742276 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"5d919700-9b50-4829-84da-97568c603805\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node 
container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:03.749968 containerd[2150]: time="2026-01-17T00:01:03.749664838Z" level=error msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" failed" error="failed to destroy network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 17 00:01:03.750129 kubelet[3417]: E0117 00:01:03.749985 3417 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:03.750129 kubelet[3417]: E0117 00:01:03.750050 3417 kuberuntime_manager.go:1546] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027"} Jan 17 00:01:03.750129 kubelet[3417]: E0117 00:01:03.750105 3417 kuberuntime_manager.go:1146] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 17 00:01:03.750347 kubelet[3417]: E0117 00:01:03.750146 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"7326267c-1eb2-4759-b98f-e8dc2742ecd4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:05.369950 systemd[1]: Started sshd@8-172.31.23.167:22-68.220.241.50:47960.service - OpenSSH per-connection server daemon (68.220.241.50:47960). Jan 17 00:01:05.916358 sshd[4834]: Accepted publickey for core from 68.220.241.50 port 47960 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:05.918966 sshd[4834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:05.927588 systemd-logind[2113]: New session 9 of user core. Jan 17 00:01:05.931893 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 17 00:01:06.442784 sshd[4834]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:06.451608 systemd[1]: sshd@8-172.31.23.167:22-68.220.241.50:47960.service: Deactivated successfully. Jan 17 00:01:06.458584 systemd-logind[2113]: Session 9 logged out. Waiting for processes to exit. 
Jan 17 00:01:06.459059 systemd[1]: session-9.scope: Deactivated successfully. Jan 17 00:01:06.463533 systemd-logind[2113]: Removed session 9. Jan 17 00:01:11.535970 systemd[1]: Started sshd@9-172.31.23.167:22-68.220.241.50:47970.service - OpenSSH per-connection server daemon (68.220.241.50:47970). Jan 17 00:01:12.125475 sshd[4855]: Accepted publickey for core from 68.220.241.50 port 47970 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:12.127292 sshd[4855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:12.141419 systemd-logind[2113]: New session 10 of user core. Jan 17 00:01:12.147097 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 17 00:01:12.568631 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount767565143.mount: Deactivated successfully. Jan 17 00:01:12.641486 containerd[2150]: time="2026-01-17T00:01:12.640626450Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:12.642074 containerd[2150]: time="2026-01-17T00:01:12.641877726Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 17 00:01:12.642961 containerd[2150]: time="2026-01-17T00:01:12.642907674Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:12.649378 containerd[2150]: time="2026-01-17T00:01:12.649308750Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 17 00:01:12.651050 containerd[2150]: time="2026-01-17T00:01:12.650987898Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 10.112732234s" Jan 17 00:01:12.651195 containerd[2150]: time="2026-01-17T00:01:12.651050838Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 17 00:01:12.683059 containerd[2150]: time="2026-01-17T00:01:12.682817923Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 17 00:01:12.711456 containerd[2150]: time="2026-01-17T00:01:12.710391127Z" level=info msg="CreateContainer within sandbox \"aa529ec0d01c17675ef8c55fba123a13aef9b6eb8cab7826fa92ffba1ef1eec3\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"943280e81465e85041c0189420af12f7c57a2b89026411fb948be11abb08d59e\"" Jan 17 00:01:12.714190 sshd[4855]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:12.717877 containerd[2150]: time="2026-01-17T00:01:12.714864043Z" level=info msg="StartContainer for \"943280e81465e85041c0189420af12f7c57a2b89026411fb948be11abb08d59e\"" Jan 17 00:01:12.737963 systemd-logind[2113]: Session 10 logged out. Waiting for processes to exit. 
Jan 17 00:01:12.740589 systemd[1]: sshd@9-172.31.23.167:22-68.220.241.50:47970.service: Deactivated successfully. Jan 17 00:01:12.749108 systemd[1]: session-10.scope: Deactivated successfully. Jan 17 00:01:12.752334 systemd-logind[2113]: Removed session 10. Jan 17 00:01:12.840202 containerd[2150]: time="2026-01-17T00:01:12.838704103Z" level=info msg="StartContainer for \"943280e81465e85041c0189420af12f7c57a2b89026411fb948be11abb08d59e\" returns successfully" Jan 17 00:01:13.212301 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 17 00:01:13.212508 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 17 00:01:13.471013 containerd[2150]: time="2026-01-17T00:01:13.470854398Z" level=info msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" Jan 17 00:01:14.029740 kubelet[3417]: I0117 00:01:14.029271 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-ckhl9" podStartSLOduration=4.328810582 podStartE2EDuration="38.029232833s" podCreationTimestamp="2026-01-17 00:00:36 +0000 UTC" firstStartedPulling="2026-01-17 00:00:38.952953275 +0000 UTC m=+36.081963360" lastFinishedPulling="2026-01-17 00:01:12.653375538 +0000 UTC m=+69.782385611" observedRunningTime="2026-01-17 00:01:13.810944672 +0000 UTC m=+70.939954781" watchObservedRunningTime="2026-01-17 00:01:14.029232833 +0000 UTC m=+71.158242966" Jan 17 00:01:14.124624 containerd[2150]: time="2026-01-17T00:01:14.122807478Z" level=info msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" Jan 17 00:01:14.126224 containerd[2150]: time="2026-01-17T00:01:14.125030166Z" level=info msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.026 [INFO][4931] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.028 [INFO][4931] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" iface="eth0" netns="/var/run/netns/cni-c5dbc9f9-a0ff-5b91-218f-851055e831e1" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.031 [INFO][4931] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" iface="eth0" netns="/var/run/netns/cni-c5dbc9f9-a0ff-5b91-218f-851055e831e1" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.032 [INFO][4931] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do.
ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" iface="eth0" netns="/var/run/netns/cni-c5dbc9f9-a0ff-5b91-218f-851055e831e1" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.033 [INFO][4931] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.033 [INFO][4931] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.182 [INFO][4968] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.182 [INFO][4968] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.182 [INFO][4968] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.203 [WARNING][4968] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.204 [INFO][4968] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.207 [INFO][4968] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:01:14.240009 containerd[2150]: 2026-01-17 00:01:14.230 [INFO][4931] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:01:14.244360 containerd[2150]: time="2026-01-17T00:01:14.243934110Z" level=info msg="TearDown network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" successfully" Jan 17 00:01:14.244360 containerd[2150]: time="2026-01-17T00:01:14.244014582Z" level=info msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" returns successfully" Jan 17 00:01:14.249765 systemd[1]: run-netns-cni\x2dc5dbc9f9\x2da0ff\x2d5b91\x2d218f\x2d851055e831e1.mount: Deactivated successfully. Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.275 [INFO][4999] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.275 [INFO][4999] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" iface="eth0" netns="/var/run/netns/cni-fdf286ca-7967-fa9f-f60f-ac655270febb" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.277 [INFO][4999] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" iface="eth0" netns="/var/run/netns/cni-fdf286ca-7967-fa9f-f60f-ac655270febb" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.277 [INFO][4999] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" iface="eth0" netns="/var/run/netns/cni-fdf286ca-7967-fa9f-f60f-ac655270febb" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.277 [INFO][4999] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.277 [INFO][4999] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.328 [INFO][5018] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.329 [INFO][5018] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.329 [INFO][5018] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.344 [WARNING][5018] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.345 [INFO][5018] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.347 [INFO][5018] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:01:14.354813 containerd[2150]: 2026-01-17 00:01:14.351 [INFO][4999] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:01:14.358310 containerd[2150]: time="2026-01-17T00:01:14.355623727Z" level=info msg="TearDown network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" successfully" Jan 17 00:01:14.358310 containerd[2150]: time="2026-01-17T00:01:14.355665115Z" level=info msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" returns successfully" Jan 17 00:01:14.359969 containerd[2150]: time="2026-01-17T00:01:14.358558219Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjcx7,Uid:d0504424-3111-46f2-be7a-effe09d60f69,Namespace:kube-system,Attempt:1,}" Jan 17 00:01:14.365935 systemd[1]: run-netns-cni\x2dfdf286ca\x2d7967\x2dfa9f\x2df60f\x2dac655270febb.mount: Deactivated successfully. 
Jan 17 00:01:14.370184 kubelet[3417]: I0117 00:01:14.367390 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-c2wrd\" (UniqueName: \"kubernetes.io/projected/e3af7f92-2f69-4868-9102-5ead109a6c2e-kube-api-access-c2wrd\") pod \"e3af7f92-2f69-4868-9102-5ead109a6c2e\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") "
Jan 17 00:01:14.370184 kubelet[3417]: I0117 00:01:14.367506 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-ca-bundle\") pod \"e3af7f92-2f69-4868-9102-5ead109a6c2e\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") "
Jan 17 00:01:14.370184 kubelet[3417]: I0117 00:01:14.368292 3417 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-backend-key-pair\") pod \"e3af7f92-2f69-4868-9102-5ead109a6c2e\" (UID: \"e3af7f92-2f69-4868-9102-5ead109a6c2e\") "
Jan 17 00:01:14.374689 kubelet[3417]: I0117 00:01:14.374619 3417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "e3af7f92-2f69-4868-9102-5ead109a6c2e" (UID: "e3af7f92-2f69-4868-9102-5ead109a6c2e"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue ""
Jan 17 00:01:14.388309 systemd[1]: var-lib-kubelet-pods-e3af7f92\x2d2f69\x2d4868\x2d9102\x2d5ead109a6c2e-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully.
Jan 17 00:01:14.391924 kubelet[3417]: I0117 00:01:14.391803 3417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "e3af7f92-2f69-4868-9102-5ead109a6c2e" (UID: "e3af7f92-2f69-4868-9102-5ead109a6c2e"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue ""
Jan 17 00:01:14.399745 kubelet[3417]: I0117 00:01:14.399662 3417 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e3af7f92-2f69-4868-9102-5ead109a6c2e-kube-api-access-c2wrd" (OuterVolumeSpecName: "kube-api-access-c2wrd") pod "e3af7f92-2f69-4868-9102-5ead109a6c2e" (UID: "e3af7f92-2f69-4868-9102-5ead109a6c2e"). InnerVolumeSpecName "kube-api-access-c2wrd". PluginName "kubernetes.io/projected", VolumeGIDValue ""
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.297 [INFO][5007] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.297 [INFO][5007] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" iface="eth0" netns="/var/run/netns/cni-b034d6b1-41d3-8001-963a-76d06f3921cf"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.298 [INFO][5007] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" iface="eth0" netns="/var/run/netns/cni-b034d6b1-41d3-8001-963a-76d06f3921cf"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.299 [INFO][5007] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" iface="eth0" netns="/var/run/netns/cni-b034d6b1-41d3-8001-963a-76d06f3921cf"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.299 [INFO][5007] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.299 [INFO][5007] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.345 [INFO][5023] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.345 [INFO][5023] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.347 [INFO][5023] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.376 [WARNING][5023] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.376 [INFO][5023] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.383 [INFO][5023] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:01:14.405524 containerd[2150]: 2026-01-17 00:01:14.396 [INFO][5007] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7"
Jan 17 00:01:14.405524 containerd[2150]: time="2026-01-17T00:01:14.405301243Z" level=info msg="TearDown network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" successfully"
Jan 17 00:01:14.405524 containerd[2150]: time="2026-01-17T00:01:14.405338083Z" level=info msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" returns successfully"
Jan 17 00:01:14.407345 containerd[2150]: time="2026-01-17T00:01:14.407053027Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-wxtgw,Uid:78e9de4b-ea97-4b48-8f59-1242c0c3be02,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 00:01:14.469787 kubelet[3417]: I0117 00:01:14.469714 3417 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-c2wrd\" (UniqueName: \"kubernetes.io/projected/e3af7f92-2f69-4868-9102-5ead109a6c2e-kube-api-access-c2wrd\") on node \"ip-172-31-23-167\" DevicePath \"\""
Jan 17 00:01:14.469787 kubelet[3417]: I0117 00:01:14.469777 3417 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-ca-bundle\") on node \"ip-172-31-23-167\" DevicePath \"\""
Jan 17 00:01:14.470007 kubelet[3417]: I0117 00:01:14.469805 3417 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/e3af7f92-2f69-4868-9102-5ead109a6c2e-whisker-backend-key-pair\") on node \"ip-172-31-23-167\" DevicePath \"\""
Jan 17 00:01:14.578364 systemd[1]: run-netns-cni\x2db034d6b1\x2d41d3\x2d8001\x2d963a\x2d76d06f3921cf.mount: Deactivated successfully.
Jan 17 00:01:14.578693 systemd[1]: var-lib-kubelet-pods-e3af7f92\x2d2f69\x2d4868\x2d9102\x2d5ead109a6c2e-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dc2wrd.mount: Deactivated successfully.
Jan 17 00:01:14.666585 (udev-worker)[4919]: Network interface NamePolicy= disabled on kernel command line.
Jan 17 00:01:14.671739 systemd-networkd[1691]: cali2c5a7be5d7d: Link UP
Jan 17 00:01:14.674867 systemd-networkd[1691]: cali2c5a7be5d7d: Gained carrier
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.458 [INFO][5034] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.485 [INFO][5034] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0 coredns-668d6bf9bc- kube-system d0504424-3111-46f2-be7a-effe09d60f69 1011 0 2026-01-17 00:00:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-167 coredns-668d6bf9bc-xjcx7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2c5a7be5d7d [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.485 [INFO][5034] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.556 [INFO][5055] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" HandleID="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.557 [INFO][5055] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" HandleID="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3b60), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-167", "pod":"coredns-668d6bf9bc-xjcx7", "timestamp":"2026-01-17 00:01:14.55697678 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.557 [INFO][5055] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.557 [INFO][5055] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.557 [INFO][5055] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167'
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.590 [INFO][5055] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.601 [INFO][5055] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.608 [INFO][5055] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.613 [INFO][5055] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.617 [INFO][5055] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.617 [INFO][5055] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.620 [INFO][5055] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.629 [INFO][5055] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.643 [INFO][5055] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.65/26] block=192.168.38.64/26 handle="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.643 [INFO][5055] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.65/26] handle="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" host="ip-172-31-23-167"
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.644 [INFO][5055] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
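The ipam.go lines above are one assignment walked end to end under the host-wide lock: resolve the host's block affinity (192.168.38.64/26), load the block, claim the first free address, create a handle, and write the block back, which yields 192.168.38.65. A condensed Go sketch of claiming the next free address from such a block (illustrative data structures, not Calico's; reserving .64 below is only so the example lines up with the log):

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    // block models an IPAM block such as 192.168.38.64/26: a CIDR plus a
    // used-set keyed by address, as in "Attempting to assign 1 addresses
    // from block" followed by "Writing block in order to claim IPs".
    type block struct {
    	cidr netip.Prefix
    	used map[netip.Addr]string // address -> handle ID
    }

    // claim returns the first unused address in the block.
    func (b *block) claim(handle string) (netip.Addr, bool) {
    	for a := b.cidr.Addr(); b.cidr.Contains(a); a = a.Next() {
    		if _, taken := b.used[a]; !taken {
    			b.used[a] = handle // persisted by writing the block back
    			return a, true
    		}
    	}
    	return netip.Addr{}, false // block exhausted: another block is needed
    }

    func main() {
    	b := &block{cidr: netip.MustParsePrefix("192.168.38.64/26"), used: map[netip.Addr]string{}}
    	b.used[netip.MustParseAddr("192.168.38.64")] = "reserved" // keep the block's first address out of the pool
    	ip, _ := b.claim("k8s-pod-network.example")
    	fmt.Println(ip) // 192.168.38.65, matching the log
    }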
Jan 17 00:01:14.710515 containerd[2150]: 2026-01-17 00:01:14.644 [INFO][5055] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.65/26] IPv6=[] ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" HandleID="k8s-pod-network.cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.650 [INFO][5034] cni-plugin/k8s.go 418: Populated endpoint ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0504424-3111-46f2-be7a-effe09d60f69", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"coredns-668d6bf9bc-xjcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5a7be5d7d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.650 [INFO][5034] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.65/32] ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.650 [INFO][5034] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2c5a7be5d7d ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.676 [INFO][5034] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.679 [INFO][5034] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0504424-3111-46f2-be7a-effe09d60f69", ResourceVersion:"1011", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779", Pod:"coredns-668d6bf9bc-xjcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5a7be5d7d", MAC:"5a:02:f7:73:13:dc", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:14.713816 containerd[2150]: 2026-01-17 00:01:14.707 [INFO][5034] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779" Namespace="kube-system" Pod="coredns-668d6bf9bc-xjcx7" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0"
Jan 17 00:01:14.790626 containerd[2150]: time="2026-01-17T00:01:14.781281525Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:14.790626 containerd[2150]: time="2026-01-17T00:01:14.781393137Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:14.790626 containerd[2150]: time="2026-01-17T00:01:14.781420077Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:14.790626 containerd[2150]: time="2026-01-17T00:01:14.781914813Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:14.864077 systemd-networkd[1691]: cali5cb9c52c6f2: Link UP
Jan 17 00:01:14.871855 systemd-networkd[1691]: cali5cb9c52c6f2: Gained carrier
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.504 [INFO][5045] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.533 [INFO][5045] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0 calico-apiserver-5fccc8c4dd- calico-apiserver 78e9de4b-ea97-4b48-8f59-1242c0c3be02 1012 0 2026-01-17 00:00:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fccc8c4dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-167 calico-apiserver-5fccc8c4dd-wxtgw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali5cb9c52c6f2 [] [] }} ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.533 [INFO][5045] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.634 [INFO][5063] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" HandleID="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.638 [INFO][5063] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" HandleID="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c9890), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-167", "pod":"calico-apiserver-5fccc8c4dd-wxtgw", "timestamp":"2026-01-17 00:01:14.633998192 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.638 [INFO][5063] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.644 [INFO][5063] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.645 [INFO][5063] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167'
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.696 [INFO][5063] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.713 [INFO][5063] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.723 [INFO][5063] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.728 [INFO][5063] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.734 [INFO][5063] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.734 [INFO][5063] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.741 [INFO][5063] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.751 [INFO][5063] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.778 [INFO][5063] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.66/26] block=192.168.38.64/26 handle="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.779 [INFO][5063] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.66/26] handle="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" host="ip-172-31-23-167"
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.787 [INFO][5063] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:01:14.992825 containerd[2150]: 2026-01-17 00:01:14.787 [INFO][5063] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.66/26] IPv6=[] ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" HandleID="k8s-pod-network.03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.821 [INFO][5045] cni-plugin/k8s.go 418: Populated endpoint ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e9de4b-ea97-4b48-8f59-1242c0c3be02", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"calico-apiserver-5fccc8c4dd-wxtgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb9c52c6f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.822 [INFO][5045] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.66/32] ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.822 [INFO][5045] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5cb9c52c6f2 ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.905 [INFO][5045] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.907 [INFO][5045] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e9de4b-ea97-4b48-8f59-1242c0c3be02", ResourceVersion:"1012", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3", Pod:"calico-apiserver-5fccc8c4dd-wxtgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb9c52c6f2", MAC:"0a:60:40:a5:be:a2", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:14.998420 containerd[2150]: 2026-01-17 00:01:14.954 [INFO][5045] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-wxtgw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0"
Jan 17 00:01:15.085198 containerd[2150]: time="2026-01-17T00:01:15.084715374Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:15.085198 containerd[2150]: time="2026-01-17T00:01:15.084810582Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:15.085198 containerd[2150]: time="2026-01-17T00:01:15.084837642Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:15.085198 containerd[2150]: time="2026-01-17T00:01:15.084985722Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:15.121870 containerd[2150]: time="2026-01-17T00:01:15.121358815Z" level=info msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\""
Jan 17 00:01:15.130791 kubelet[3417]: I0117 00:01:15.128396 3417 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e3af7f92-2f69-4868-9102-5ead109a6c2e" path="/var/lib/kubelet/pods/e3af7f92-2f69-4868-9102-5ead109a6c2e/volumes"
Jan 17 00:01:15.188260 containerd[2150]: time="2026-01-17T00:01:15.185711263Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-xjcx7,Uid:d0504424-3111-46f2-be7a-effe09d60f69,Namespace:kube-system,Attempt:1,} returns sandbox id \"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779\""
Jan 17 00:01:15.200424 containerd[2150]: time="2026-01-17T00:01:15.195644287Z" level=info msg="CreateContainer within sandbox \"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:01:15.299920 containerd[2150]: time="2026-01-17T00:01:15.299776268Z" level=info msg="CreateContainer within sandbox \"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"267d25aca1cde5258c7e9e25cdde351926a7e20609636d273b0caed28234394b\""
Jan 17 00:01:15.303696 containerd[2150]: time="2026-01-17T00:01:15.303463568Z" level=info msg="StartContainer for \"267d25aca1cde5258c7e9e25cdde351926a7e20609636d273b0caed28234394b\""
Jan 17 00:01:15.575631 containerd[2150]: time="2026-01-17T00:01:15.573428109Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-wxtgw,Uid:78e9de4b-ea97-4b48-8f59-1242c0c3be02,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3\""
Jan 17 00:01:15.612735 containerd[2150]: time="2026-01-17T00:01:15.612687549Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 17 00:01:15.624262 containerd[2150]: time="2026-01-17T00:01:15.621993729Z" level=info msg="StartContainer for \"267d25aca1cde5258c7e9e25cdde351926a7e20609636d273b0caed28234394b\" returns successfully"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.345 [INFO][5209] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.345 [INFO][5209] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" iface="eth0" netns="/var/run/netns/cni-38648888-3266-5056-1df9-508be08314c6"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.348 [INFO][5209] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" iface="eth0" netns="/var/run/netns/cni-38648888-3266-5056-1df9-508be08314c6"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.352 [INFO][5209] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" iface="eth0" netns="/var/run/netns/cni-38648888-3266-5056-1df9-508be08314c6"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.352 [INFO][5209] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.352 [INFO][5209] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.635 [INFO][5239] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.638 [INFO][5239] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.638 [INFO][5239] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.671 [WARNING][5239] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.671 [INFO][5239] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.678 [INFO][5239] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:01:15.698034 containerd[2150]: 2026-01-17 00:01:15.689 [INFO][5209] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3"
Jan 17 00:01:15.710607 containerd[2150]: time="2026-01-17T00:01:15.710385010Z" level=info msg="TearDown network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" successfully"
Jan 17 00:01:15.710607 containerd[2150]: time="2026-01-17T00:01:15.710461522Z" level=info msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" returns successfully"
Jan 17 00:01:15.711099 systemd[1]: run-netns-cni\x2d38648888\x2d3266\x2d5056\x2d1df9\x2d508be08314c6.mount: Deactivated successfully.
Jan 17 00:01:15.717706 containerd[2150]: time="2026-01-17T00:01:15.714271954Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4l665,Uid:cb54f0b2-682d-402d-a9c8-8c6e24f363be,Namespace:kube-system,Attempt:1,}"
Jan 17 00:01:15.876613 kubelet[3417]: I0117 00:01:15.876323 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-xjcx7" podStartSLOduration=68.876300646 podStartE2EDuration="1m8.876300646s" podCreationTimestamp="2026-01-17 00:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:15.820899778 +0000 UTC m=+72.949909863" watchObservedRunningTime="2026-01-17 00:01:15.876300646 +0000 UTC m=+73.005310731"
Jan 17 00:01:15.932222 systemd-networkd[1691]: cali5cb9c52c6f2: Gained IPv6LL
Jan 17 00:01:15.954881 containerd[2150]: time="2026-01-17T00:01:15.954473503Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io
Jan 17 00:01:15.957623 containerd[2150]: time="2026-01-17T00:01:15.957373259Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 17 00:01:15.957623 containerd[2150]: time="2026-01-17T00:01:15.957413555Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 17 00:01:15.959910 kubelet[3417]: E0117 00:01:15.959075 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:01:15.959910 kubelet[3417]: E0117 00:01:15.959157 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 17 00:01:15.994419 kubelet[3417]: E0117 00:01:15.992960 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlct9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-wxtgw_calico-apiserver(78e9de4b-ea97-4b48-8f59-1242c0c3be02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 17 00:01:15.997772 kubelet[3417]: E0117 00:01:15.995580 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02"
Jan 17 00:01:16.124117 systemd-networkd[1691]: cali2c5a7be5d7d: Gained IPv6LL
Jan 17 00:01:16.139316 containerd[2150]: time="2026-01-17T00:01:16.138955928Z" level=info msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\""
Jan 17 00:01:16.329646 systemd-networkd[1691]: cali10176058c93: Link UP
Jan 17 00:01:16.346960 systemd-networkd[1691]: cali10176058c93: Gained carrier
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:15.931 [INFO][5331] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:15.977 [INFO][5331] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0 coredns-668d6bf9bc- kube-system cb54f0b2-682d-402d-a9c8-8c6e24f363be 1041 0 2026-01-17 00:00:07 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-23-167 coredns-668d6bf9bc-4l665 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali10176058c93 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:15.977 [INFO][5331] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.150 [INFO][5348] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" HandleID="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.150 [INFO][5348] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" HandleID="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035d8b0), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-23-167", "pod":"coredns-668d6bf9bc-4l665", "timestamp":"2026-01-17 00:01:16.150046748 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.151 [INFO][5348] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.151 [INFO][5348] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.151 [INFO][5348] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167'
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.181 [INFO][5348] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.200 [INFO][5348] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.214 [INFO][5348] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.222 [INFO][5348] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.227 [INFO][5348] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.228 [INFO][5348] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.239 [INFO][5348] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.249 [INFO][5348] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.267 [INFO][5348] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.67/26] block=192.168.38.64/26 handle="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.267 [INFO][5348] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.67/26] handle="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" host="ip-172-31-23-167"
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.267 [INFO][5348] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
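All three workloads in this section draw from the same affine block, 192.168.38.64/26, and get consecutive addresses (.65, .66, .67). A /26 spans 2^(32-26) = 64 addresses (192.168.38.64 through 192.168.38.127), so a single block covers up to 64 pod IPs on this node before another block would have to be claimed; the arithmetic in Go:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	p := netip.MustParsePrefix("192.168.38.64/26")
    	size := 1 << (32 - p.Bits()) // 2^(32-26) = 64 addresses in the block
    	fmt.Printf("%s holds %d addresses starting at %s\n", p, size, p.Addr())
    }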
Jan 17 00:01:16.433973 containerd[2150]: 2026-01-17 00:01:16.267 [INFO][5348] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.67/26] IPv6=[] ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" HandleID="k8s-pod-network.341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.287 [INFO][5331] cni-plugin/k8s.go 418: Populated endpoint ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb54f0b2-682d-402d-a9c8-8c6e24f363be", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"coredns-668d6bf9bc-4l665", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10176058c93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.288 [INFO][5331] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.67/32] ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.288 [INFO][5331] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali10176058c93 ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.352 [INFO][5331] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.356 [INFO][5331] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb54f0b2-682d-402d-a9c8-8c6e24f363be", ResourceVersion:"1041", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36", Pod:"coredns-668d6bf9bc-4l665", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10176058c93", MAC:"02:64:df:18:d1:0c", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 17 00:01:16.442770 containerd[2150]: 2026-01-17 00:01:16.394 [INFO][5331] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36" Namespace="kube-system" Pod="coredns-668d6bf9bc-4l665" WorkloadEndpoint="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0"
Jan 17 00:01:16.589104 containerd[2150]: time="2026-01-17T00:01:16.587562214Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 17 00:01:16.589104 containerd[2150]: time="2026-01-17T00:01:16.587683726Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 17 00:01:16.589104 containerd[2150]: time="2026-01-17T00:01:16.587721382Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:16.589104 containerd[2150]: time="2026-01-17T00:01:16.587913574Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.364 [INFO][5365] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.365 [INFO][5365] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" iface="eth0" netns="/var/run/netns/cni-78de7059-85e3-f9b6-1786-dd7fb9ee5eec"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.365 [INFO][5365] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" iface="eth0" netns="/var/run/netns/cni-78de7059-85e3-f9b6-1786-dd7fb9ee5eec"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.367 [INFO][5365] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" iface="eth0" netns="/var/run/netns/cni-78de7059-85e3-f9b6-1786-dd7fb9ee5eec"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.369 [INFO][5365] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.372 [INFO][5365] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.595 [INFO][5381] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.607 [INFO][5381] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.607 [INFO][5381] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.653 [WARNING][5381] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.654 [INFO][5381] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0"
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.658 [INFO][5381] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 17 00:01:16.715079 containerd[2150]: 2026-01-17 00:01:16.673 [INFO][5365] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4"
Jan 17 00:01:16.715079 containerd[2150]: time="2026-01-17T00:01:16.710798123Z" level=info msg="TearDown network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" successfully"
Jan 17 00:01:16.715079 containerd[2150]: time="2026-01-17T00:01:16.710840123Z" level=info msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" returns successfully"
Jan 17 00:01:16.717437 systemd[1]: run-containerd-runc-k8s.io-341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36-runc.fVUcNa.mount: Deactivated successfully.
Jan 17 00:01:16.740099 systemd[1]: run-netns-cni\x2d78de7059\x2d85e3\x2df9b6\x2d1786\x2ddd7fb9ee5eec.mount: Deactivated successfully.
Jan 17 00:01:16.766027 containerd[2150]: time="2026-01-17T00:01:16.765862475Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-j7cxw,Uid:5d919700-9b50-4829-84da-97568c603805,Namespace:calico-apiserver,Attempt:1,}"
Jan 17 00:01:16.845812 kubelet[3417]: E0117 00:01:16.845477 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02"
Jan 17 00:01:16.961739 containerd[2150]: time="2026-01-17T00:01:16.956613108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-4l665,Uid:cb54f0b2-682d-402d-a9c8-8c6e24f363be,Namespace:kube-system,Attempt:1,} returns sandbox id \"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36\""
Jan 17 00:01:17.009094 containerd[2150]: time="2026-01-17T00:01:17.007748156Z" level=info msg="CreateContainer within sandbox \"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36\" for container &ContainerMetadata{Name:coredns,Attempt:0,}"
Jan 17 00:01:17.085541 containerd[2150]: time="2026-01-17T00:01:17.085436120Z" level=info msg="CreateContainer within sandbox \"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"efa5263776655e6e5b80a2460f61b0bf37e736a54fcad8d97e9f3e212eaf173a\""
Jan 17 00:01:17.089258 containerd[2150]: time="2026-01-17T00:01:17.088866260Z" level=info msg="StartContainer for \"efa5263776655e6e5b80a2460f61b0bf37e736a54fcad8d97e9f3e212eaf173a\""
Jan 17 00:01:17.099497 kernel: bpftool[5486]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set
Jan 17 00:01:17.144788 containerd[2150]: time="2026-01-17T00:01:17.144102057Z" level=info msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\""
Jan 17 00:01:17.151963 containerd[2150]: time="2026-01-17T00:01:17.150269109Z" level=info msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\""
Jan 17 00:01:17.211830 systemd-resolved[2021]: Under memory pressure, flushing caches.
Jan 17 00:01:17.215221 systemd-journald[1604]: Under memory pressure, flushing caches.
Jan 17 00:01:17.211932 systemd-resolved[2021]: Flushed all caches.
Jan 17 00:01:17.379852 systemd-networkd[1691]: cali6ed590c5016: Link UP Jan 17 00:01:17.382549 systemd-networkd[1691]: cali6ed590c5016: Gained carrier Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:16.968 [INFO][5435] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.002 [INFO][5435] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0 calico-apiserver-5fccc8c4dd- calico-apiserver 5d919700-9b50-4829-84da-97568c603805 1061 0 2026-01-17 00:00:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5fccc8c4dd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-23-167 calico-apiserver-5fccc8c4dd-j7cxw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali6ed590c5016 [] [] }} ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.003 [INFO][5435] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.132 [INFO][5473] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" HandleID="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.133 [INFO][5473] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" HandleID="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400035eec0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-23-167", "pod":"calico-apiserver-5fccc8c4dd-j7cxw", "timestamp":"2026-01-17 00:01:17.132900957 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.133 [INFO][5473] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.133 [INFO][5473] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.133 [INFO][5473] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167' Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.180 [INFO][5473] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.197 [INFO][5473] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.214 [INFO][5473] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.239 [INFO][5473] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.254 [INFO][5473] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.254 [INFO][5473] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.260 [INFO][5473] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875 Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.288 [INFO][5473] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.328 [INFO][5473] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.68/26] block=192.168.38.64/26 handle="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.328 [INFO][5473] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.68/26] handle="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" host="ip-172-31-23-167" Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.328 [INFO][5473] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:01:17.462479 containerd[2150]: 2026-01-17 00:01:17.328 [INFO][5473] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.68/26] IPv6=[] ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" HandleID="k8s-pod-network.c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.339 [INFO][5435] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d919700-9b50-4829-84da-97568c603805", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"calico-apiserver-5fccc8c4dd-j7cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ed590c5016", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.339 [INFO][5435] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.68/32] ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.339 [INFO][5435] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali6ed590c5016 ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.407 [INFO][5435] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.408 [INFO][5435] cni-plugin/k8s.go 446: Added Mac, interface 
name, and active container ID to endpoint ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d919700-9b50-4829-84da-97568c603805", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875", Pod:"calico-apiserver-5fccc8c4dd-j7cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ed590c5016", MAC:"de:8f:41:dd:06:e3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:17.464234 containerd[2150]: 2026-01-17 00:01:17.441 [INFO][5435] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875" Namespace="calico-apiserver" Pod="calico-apiserver-5fccc8c4dd-j7cxw" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:01:17.523237 containerd[2150]: time="2026-01-17T00:01:17.523142627Z" level=info msg="StartContainer for \"efa5263776655e6e5b80a2460f61b0bf37e736a54fcad8d97e9f3e212eaf173a\" returns successfully" Jan 17 00:01:17.660347 containerd[2150]: time="2026-01-17T00:01:17.659777819Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:17.661184 containerd[2150]: time="2026-01-17T00:01:17.659917643Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:17.661184 containerd[2150]: time="2026-01-17T00:01:17.659960051Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:17.661184 containerd[2150]: time="2026-01-17T00:01:17.660154259Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:17.813115 systemd[1]: Started sshd@10-172.31.23.167:22-68.220.241.50:39812.service - OpenSSH per-connection server daemon (68.220.241.50:39812). 
Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.447 [INFO][5511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.447 [INFO][5511] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" iface="eth0" netns="/var/run/netns/cni-d0729d59-bf30-ed88-9e03-bb4ab9fcaee8" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.448 [INFO][5511] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" iface="eth0" netns="/var/run/netns/cni-d0729d59-bf30-ed88-9e03-bb4ab9fcaee8" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.452 [INFO][5511] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" iface="eth0" netns="/var/run/netns/cni-d0729d59-bf30-ed88-9e03-bb4ab9fcaee8" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.452 [INFO][5511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.452 [INFO][5511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.670 [INFO][5548] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.683 [INFO][5548] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.684 [INFO][5548] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.761 [WARNING][5548] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.761 [INFO][5548] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.766 [INFO][5548] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:01:17.823529 containerd[2150]: 2026-01-17 00:01:17.795 [INFO][5511] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:01:17.823529 containerd[2150]: time="2026-01-17T00:01:17.822561504Z" level=info msg="TearDown network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" successfully" Jan 17 00:01:17.823529 containerd[2150]: time="2026-01-17T00:01:17.822599544Z" level=info msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" returns successfully" Jan 17 00:01:17.827850 containerd[2150]: time="2026-01-17T00:01:17.824981676Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmx9m,Uid:e6a5f346-af6c-40f3-8c32-a682e7923b77,Namespace:calico-system,Attempt:1,}" Jan 17 00:01:17.840744 systemd[1]: run-netns-cni\x2dd0729d59\x2dbf30\x2ded88\x2d9e03\x2dbb4ab9fcaee8.mount: Deactivated successfully. Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.552 [INFO][5524] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.557 [INFO][5524] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" iface="eth0" netns="/var/run/netns/cni-1e9463b5-ec95-fb5a-3ff0-a161037b8a9e" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.558 [INFO][5524] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" iface="eth0" netns="/var/run/netns/cni-1e9463b5-ec95-fb5a-3ff0-a161037b8a9e" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.562 [INFO][5524] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" iface="eth0" netns="/var/run/netns/cni-1e9463b5-ec95-fb5a-3ff0-a161037b8a9e" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.562 [INFO][5524] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.562 [INFO][5524] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.751 [INFO][5572] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.751 [INFO][5572] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.766 [INFO][5572] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.818 [WARNING][5572] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.818 [INFO][5572] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.833 [INFO][5572] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:01:17.923007 containerd[2150]: 2026-01-17 00:01:17.881 [INFO][5524] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:01:17.950472 containerd[2150]: time="2026-01-17T00:01:17.934958137Z" level=info msg="TearDown network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" successfully" Jan 17 00:01:17.950472 containerd[2150]: time="2026-01-17T00:01:17.935025049Z" level=info msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" returns successfully" Jan 17 00:01:17.962597 kubelet[3417]: I0117 00:01:17.951983 3417 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-4l665" podStartSLOduration=70.951959557 podStartE2EDuration="1m10.951959557s" podCreationTimestamp="2026-01-17 00:00:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-17 00:01:17.904591788 +0000 UTC m=+75.033601909" watchObservedRunningTime="2026-01-17 00:01:17.951959557 +0000 UTC m=+75.080969642" Jan 17 00:01:17.963303 containerd[2150]: time="2026-01-17T00:01:17.957596929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f49dc95d-4gk47,Uid:26583054-1df4-4aad-bd58-41f9694f0072,Namespace:calico-system,Attempt:1,}" Jan 17 00:01:17.952659 systemd[1]: run-netns-cni\x2d1e9463b5\x2dec95\x2dfb5a\x2d3ff0\x2da161037b8a9e.mount: Deactivated successfully. Jan 17 00:01:18.123831 containerd[2150]: time="2026-01-17T00:01:18.121755094Z" level=info msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" Jan 17 00:01:18.236795 systemd-networkd[1691]: cali10176058c93: Gained IPv6LL Jan 17 00:01:18.440740 (udev-worker)[4921]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:01:18.493778 containerd[2150]: time="2026-01-17T00:01:18.492895115Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5fccc8c4dd-j7cxw,Uid:5d919700-9b50-4829-84da-97568c603805,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875\"" Jan 17 00:01:18.498791 systemd-networkd[1691]: vxlan.calico: Link UP Jan 17 00:01:18.498808 systemd-networkd[1691]: vxlan.calico: Gained carrier Jan 17 00:01:18.507963 containerd[2150]: time="2026-01-17T00:01:18.507812543Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:01:18.538913 sshd[5629]: Accepted publickey for core from 68.220.241.50 port 39812 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:18.545100 sshd[5629]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:18.575870 systemd-logind[2113]: New session 11 of user core. Jan 17 00:01:18.581654 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 17 00:01:18.858635 containerd[2150]: time="2026-01-17T00:01:18.858337849Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:18.866710 containerd[2150]: time="2026-01-17T00:01:18.864234325Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:01:18.866710 containerd[2150]: time="2026-01-17T00:01:18.866629165Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:18.872324 kubelet[3417]: E0117 00:01:18.867134 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:18.872324 kubelet[3417]: E0117 00:01:18.867205 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:18.876260 kubelet[3417]: E0117 00:01:18.875941 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkkn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:18.879537 kubelet[3417]: E0117 00:01:18.878810 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:18.933287 kubelet[3417]: E0117 00:01:18.931217 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:19.142129 (udev-worker)[5703]: Network interface NamePolicy= disabled on kernel command line. 
Jan 17 00:01:19.163206 systemd-networkd[1691]: calid6b7e87d852: Link UP Jan 17 00:01:19.178497 systemd-networkd[1691]: calid6b7e87d852: Gained carrier Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.751 [INFO][5663] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.757 [INFO][5663] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" iface="eth0" netns="/var/run/netns/cni-b610ee40-86f1-7390-dfd1-32395d10f437" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.759 [INFO][5663] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" iface="eth0" netns="/var/run/netns/cni-b610ee40-86f1-7390-dfd1-32395d10f437" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.762 [INFO][5663] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" iface="eth0" netns="/var/run/netns/cni-b610ee40-86f1-7390-dfd1-32395d10f437" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.763 [INFO][5663] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:18.763 [INFO][5663] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.087 [INFO][5709] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.091 [INFO][5709] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.091 [INFO][5709] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.150 [WARNING][5709] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.150 [INFO][5709] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.165 [INFO][5709] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:01:19.229790 containerd[2150]: 2026-01-17 00:01:19.211 [INFO][5663] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:01:19.238024 containerd[2150]: time="2026-01-17T00:01:19.233958467Z" level=info msg="TearDown network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" successfully" Jan 17 00:01:19.238024 containerd[2150]: time="2026-01-17T00:01:19.234048455Z" level=info msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" returns successfully" Jan 17 00:01:19.259242 containerd[2150]: time="2026-01-17T00:01:19.242390675Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjr5r,Uid:7326267c-1eb2-4759-b98f-e8dc2742ecd4,Namespace:calico-system,Attempt:1,}" Jan 17 00:01:19.274397 systemd-journald[1604]: Under memory pressure, flushing caches. Jan 17 00:01:19.262194 systemd[1]: run-netns-cni\x2db610ee40\x2d86f1\x2d7390\x2ddfd1\x2d32395d10f437.mount: Deactivated successfully. Jan 17 00:01:19.269171 systemd-resolved[2021]: Under memory pressure, flushing caches. Jan 17 00:01:19.269200 systemd-resolved[2021]: Flushed all caches. Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.396 [INFO][5643] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0 calico-kube-controllers-74f49dc95d- calico-system 26583054-1df4-4aad-bd58-41f9694f0072 1080 0 2026-01-17 00:00:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:74f49dc95d projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-23-167 calico-kube-controllers-74f49dc95d-4gk47 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calid6b7e87d852 [] [] }} ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.396 [INFO][5643] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.777 [INFO][5680] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" HandleID="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.782 [INFO][5680] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" HandleID="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003336f0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-167", "pod":"calico-kube-controllers-74f49dc95d-4gk47", 
"timestamp":"2026-01-17 00:01:18.777260965 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.783 [INFO][5680] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.783 [INFO][5680] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.783 [INFO][5680] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167' Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.857 [INFO][5680] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.891 [INFO][5680] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.941 [INFO][5680] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.962 [INFO][5680] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.993 [INFO][5680] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:18.993 [INFO][5680] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.006 [INFO][5680] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0 Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.052 [INFO][5680] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.078 [INFO][5680] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.69/26] block=192.168.38.64/26 handle="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.078 [INFO][5680] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.69/26] handle="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" host="ip-172-31-23-167" Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.078 [INFO][5680] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:01:19.315130 containerd[2150]: 2026-01-17 00:01:19.082 [INFO][5680] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.69/26] IPv6=[] ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" HandleID="k8s-pod-network.886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.327679 containerd[2150]: 2026-01-17 00:01:19.102 [INFO][5643] cni-plugin/k8s.go 418: Populated endpoint ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0", GenerateName:"calico-kube-controllers-74f49dc95d-", Namespace:"calico-system", SelfLink:"", UID:"26583054-1df4-4aad-bd58-41f9694f0072", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f49dc95d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"calico-kube-controllers-74f49dc95d-4gk47", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6b7e87d852", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:19.327679 containerd[2150]: 2026-01-17 00:01:19.105 [INFO][5643] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.69/32] ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.327679 containerd[2150]: 2026-01-17 00:01:19.105 [INFO][5643] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calid6b7e87d852 ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.327679 containerd[2150]: 2026-01-17 00:01:19.190 [INFO][5643] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.327679 containerd[2150]: 
2026-01-17 00:01:19.204 [INFO][5643] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0", GenerateName:"calico-kube-controllers-74f49dc95d-", Namespace:"calico-system", SelfLink:"", UID:"26583054-1df4-4aad-bd58-41f9694f0072", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f49dc95d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0", Pod:"calico-kube-controllers-74f49dc95d-4gk47", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6b7e87d852", MAC:"ce:c0:f2:26:bf:06", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:19.327679 containerd[2150]: 2026-01-17 00:01:19.289 [INFO][5643] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0" Namespace="calico-system" Pod="calico-kube-controllers-74f49dc95d-4gk47" WorkloadEndpoint="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:01:19.323702 systemd-networkd[1691]: cali6ed590c5016: Gained IPv6LL Jan 17 00:01:19.408693 sshd[5629]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:19.440868 systemd[1]: sshd@10-172.31.23.167:22-68.220.241.50:39812.service: Deactivated successfully. Jan 17 00:01:19.456131 systemd[1]: session-11.scope: Deactivated successfully. Jan 17 00:01:19.463137 systemd-logind[2113]: Session 11 logged out. Waiting for processes to exit. Jan 17 00:01:19.473474 systemd-logind[2113]: Removed session 11. Jan 17 00:01:19.494041 systemd[1]: Started sshd@11-172.31.23.167:22-68.220.241.50:39818.service - OpenSSH per-connection server daemon (68.220.241.50:39818). Jan 17 00:01:19.546551 containerd[2150]: time="2026-01-17T00:01:19.545484829Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:19.546551 containerd[2150]: time="2026-01-17T00:01:19.545595817Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:19.546551 containerd[2150]: time="2026-01-17T00:01:19.545633149Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:19.546551 containerd[2150]: time="2026-01-17T00:01:19.545822917Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:19.604485 systemd-networkd[1691]: cali9519aa4d36b: Link UP Jan 17 00:01:19.613368 systemd-networkd[1691]: cali9519aa4d36b: Gained carrier Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:18.728 [INFO][5637] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0 goldmane-666569f655- calico-system e6a5f346-af6c-40f3-8c32-a682e7923b77 1078 0 2026-01-17 00:00:30 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-23-167 goldmane-666569f655-vmx9m eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali9519aa4d36b [] [] }} ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:18.728 [INFO][5637] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.341 [INFO][5711] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" HandleID="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.347 [INFO][5711] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" HandleID="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003147b0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-167", "pod":"goldmane-666569f655-vmx9m", "timestamp":"2026-01-17 00:01:19.341422836 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.347 [INFO][5711] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.348 [INFO][5711] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.348 [INFO][5711] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167' Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.400 [INFO][5711] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.428 [INFO][5711] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.441 [INFO][5711] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.456 [INFO][5711] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.490 [INFO][5711] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.490 [INFO][5711] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.507 [INFO][5711] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087 Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.524 [INFO][5711] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.546 [INFO][5711] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.70/26] block=192.168.38.64/26 handle="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.547 [INFO][5711] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.70/26] handle="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" host="ip-172-31-23-167" Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.547 [INFO][5711] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:01:19.696611 containerd[2150]: 2026-01-17 00:01:19.547 [INFO][5711] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.70/26] IPv6=[] ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" HandleID="k8s-pod-network.556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.581 [INFO][5637] cni-plugin/k8s.go 418: Populated endpoint ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6a5f346-af6c-40f3-8c32-a682e7923b77", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"goldmane-666569f655-vmx9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9519aa4d36b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.582 [INFO][5637] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.70/32] ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.582 [INFO][5637] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9519aa4d36b ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.622 [INFO][5637] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.629 [INFO][5637] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" 
WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6a5f346-af6c-40f3-8c32-a682e7923b77", ResourceVersion:"1078", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087", Pod:"goldmane-666569f655-vmx9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9519aa4d36b", MAC:"3e:20:a8:3a:25:f0", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:19.700027 containerd[2150]: 2026-01-17 00:01:19.684 [INFO][5637] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087" Namespace="calico-system" Pod="goldmane-666569f655-vmx9m" WorkloadEndpoint="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:01:19.824122 containerd[2150]: time="2026-01-17T00:01:19.823900646Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:19.824969 containerd[2150]: time="2026-01-17T00:01:19.824899790Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:19.825205 containerd[2150]: time="2026-01-17T00:01:19.825147590Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:19.829490 containerd[2150]: time="2026-01-17T00:01:19.828327410Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:19.934630 kubelet[3417]: E0117 00:01:19.933835 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:19.988950 containerd[2150]: time="2026-01-17T00:01:19.984840663Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-74f49dc95d-4gk47,Uid:26583054-1df4-4aad-bd58-41f9694f0072,Namespace:calico-system,Attempt:1,} returns sandbox id \"886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0\"" Jan 17 00:01:20.022063 containerd[2150]: time="2026-01-17T00:01:20.021695675Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:01:20.035223 systemd-networkd[1691]: vxlan.calico: Gained IPv6LL Jan 17 00:01:20.085729 systemd-networkd[1691]: cali750295ee434: Link UP Jan 17 00:01:20.097047 systemd-networkd[1691]: cali750295ee434: Gained carrier Jan 17 00:01:20.149285 sshd[5766]: Accepted publickey for core from 68.220.241.50 port 39818 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:20.159866 sshd[5766]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.655 [INFO][5742] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0 csi-node-driver- calico-system 7326267c-1eb2-4759-b98f-e8dc2742ecd4 1097 0 2026-01-17 00:00:36 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-23-167 csi-node-driver-jjr5r eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali750295ee434 [] [] }} ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.657 [INFO][5742] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.793 [INFO][5791] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" HandleID="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.795 [INFO][5791] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" HandleID="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40003601d0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-23-167", "pod":"csi-node-driver-jjr5r", "timestamp":"2026-01-17 00:01:19.793935998 +0000 UTC"}, Hostname:"ip-172-31-23-167", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.798 [INFO][5791] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.798 [INFO][5791] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.798 [INFO][5791] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-23-167' Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.844 [INFO][5791] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.867 [INFO][5791] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.888 [INFO][5791] ipam/ipam.go 511: Trying affinity for 192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.909 [INFO][5791] ipam/ipam.go 158: Attempting to load block cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.931 [INFO][5791] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.38.64/26 host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.931 [INFO][5791] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.38.64/26 handle="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.941 [INFO][5791] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.965 [INFO][5791] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.38.64/26 handle="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.996 [INFO][5791] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.38.71/26] block=192.168.38.64/26 handle="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.996 [INFO][5791] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.38.71/26] handle="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" host="ip-172-31-23-167" Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.996 [INFO][5791] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 17 00:01:20.182620 containerd[2150]: 2026-01-17 00:01:19.996 [INFO][5791] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.38.71/26] IPv6=[] ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" HandleID="k8s-pod-network.2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.017 [INFO][5742] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7326267c-1eb2-4759-b98f-e8dc2742ecd4", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"", Pod:"csi-node-driver-jjr5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali750295ee434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.027 [INFO][5742] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.38.71/32] ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.032 [INFO][5742] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali750295ee434 ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.077 [INFO][5742] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.080 [INFO][5742] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" 
Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7326267c-1eb2-4759-b98f-e8dc2742ecd4", ResourceVersion:"1097", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc", Pod:"csi-node-driver-jjr5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali750295ee434", MAC:"fe:a5:34:aa:eb:ba", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:01:20.195328 containerd[2150]: 2026-01-17 00:01:20.148 [INFO][5742] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc" Namespace="calico-system" Pod="csi-node-driver-jjr5r" WorkloadEndpoint="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:01:20.208527 systemd-logind[2113]: New session 12 of user core. Jan 17 00:01:20.221887 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 17 00:01:20.296984 containerd[2150]: time="2026-01-17T00:01:20.296288640Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 17 00:01:20.296984 containerd[2150]: time="2026-01-17T00:01:20.296392080Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 17 00:01:20.296984 containerd[2150]: time="2026-01-17T00:01:20.296430288Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:20.296984 containerd[2150]: time="2026-01-17T00:01:20.296655612Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 17 00:01:20.418057 containerd[2150]: time="2026-01-17T00:01:20.417738049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-vmx9m,Uid:e6a5f346-af6c-40f3-8c32-a682e7923b77,Namespace:calico-system,Attempt:1,} returns sandbox id \"556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087\"" Jan 17 00:01:20.445793 containerd[2150]: time="2026-01-17T00:01:20.445738477Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:20.449905 containerd[2150]: time="2026-01-17T00:01:20.449256193Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:01:20.450736 containerd[2150]: time="2026-01-17T00:01:20.449760745Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:01:20.454226 kubelet[3417]: E0117 00:01:20.453498 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:01:20.454226 kubelet[3417]: E0117 00:01:20.453568 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:01:20.454937 kubelet[3417]: E0117 00:01:20.454350 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2snwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:20.458800 containerd[2150]: time="2026-01-17T00:01:20.454776409Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:01:20.459217 kubelet[3417]: E0117 00:01:20.459099 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:20.535936 containerd[2150]: time="2026-01-17T00:01:20.535861706Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-jjr5r,Uid:7326267c-1eb2-4759-b98f-e8dc2742ecd4,Namespace:calico-system,Attempt:1,} returns sandbox id \"2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc\"" Jan 17 00:01:20.753926 containerd[2150]: time="2026-01-17T00:01:20.753845571Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:20.756097 containerd[2150]: time="2026-01-17T00:01:20.756028431Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:01:20.756204 containerd[2150]: time="2026-01-17T00:01:20.756168519Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:20.756521 kubelet[3417]: E0117 00:01:20.756466 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve 
reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:01:20.757154 kubelet[3417]: E0117 00:01:20.756550 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:01:20.759529 containerd[2150]: time="2026-01-17T00:01:20.756992427Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:01:20.761954 kubelet[3417]: E0117 00:01:20.759472 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lmlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmx9m_calico-system(e6a5f346-af6c-40f3-8c32-a682e7923b77): ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:20.761954 kubelet[3417]: E0117 00:01:20.761862 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:20.887763 sshd[5766]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:20.896270 systemd[1]: sshd@11-172.31.23.167:22-68.220.241.50:39818.service: Deactivated successfully. Jan 17 00:01:20.905056 systemd[1]: session-12.scope: Deactivated successfully. Jan 17 00:01:20.905262 systemd-logind[2113]: Session 12 logged out. Waiting for processes to exit. Jan 17 00:01:20.908134 systemd-logind[2113]: Removed session 12. Jan 17 00:01:20.929635 kubelet[3417]: E0117 00:01:20.929369 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:20.947486 kubelet[3417]: E0117 00:01:20.944705 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:20.995855 systemd[1]: Started sshd@12-172.31.23.167:22-68.220.241.50:39822.service - OpenSSH per-connection server daemon (68.220.241.50:39822). 
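Every pull in this stretch fails the same way: containerd asks ghcr.io for the v3.30.4 tag, the registry answers http.StatusNotFound, and kubelet converts the resulting ErrImagePull into ImagePullBackOff for the pod. A rough way to confirm the missing tag from outside the node, assuming ghcr.io's anonymous token flow for public repositories (tag_exists is a hypothetical helper, not tooling present on this host):

    import json, urllib.request, urllib.error

    def tag_exists(repo, tag):
        # Anonymous pull token for a public ghcr.io repository (assumed flow).
        tok = json.load(urllib.request.urlopen(
            f"https://ghcr.io/token?scope=repository:{repo}:pull"))["token"]
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            headers={"Authorization": f"Bearer {tok}",
                     "Accept": "application/vnd.oci.image.index.v1+json"})
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:   # the http.StatusNotFound containerd logged
                return False
            raise

    print(tag_exists("flatcar/calico/apiserver", "v3.30.4"))   # False, per this log

A 404 here matches the "failed to resolve reference" errors above: the reference is syntactically valid, the tag simply does not exist in the registry.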
Jan 17 00:01:21.048854 containerd[2150]: time="2026-01-17T00:01:21.048795708Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:21.051202 containerd[2150]: time="2026-01-17T00:01:21.051144768Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:01:21.051431 containerd[2150]: time="2026-01-17T00:01:21.051238092Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:01:21.051830 kubelet[3417]: E0117 00:01:21.051784 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:01:21.052218 kubelet[3417]: E0117 00:01:21.051951 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:01:21.052218 kubelet[3417]: E0117 00:01:21.052137 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:21.055614 containerd[2150]: time="2026-01-17T00:01:21.055555272Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:01:21.115689 systemd-networkd[1691]: cali9519aa4d36b: Gained IPv6LL Jan 17 00:01:21.179609 systemd-networkd[1691]: calid6b7e87d852: Gained IPv6LL Jan 17 00:01:21.375630 systemd-networkd[1691]: cali750295ee434: Gained IPv6LL Jan 17 00:01:21.509296 containerd[2150]: time="2026-01-17T00:01:21.509216270Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:21.511377 containerd[2150]: time="2026-01-17T00:01:21.511311074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:01:21.511523 containerd[2150]: time="2026-01-17T00:01:21.511472114Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:01:21.511761 kubelet[3417]: E0117 00:01:21.511678 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:01:21.511862 kubelet[3417]: E0117 00:01:21.511759 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:01:21.511991 kubelet[3417]: E0117 00:01:21.511913 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:21.513254 kubelet[3417]: E0117 00:01:21.513181 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:21.541421 sshd[5960]: Accepted publickey for core from 68.220.241.50 port 39822 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:21.544189 sshd[5960]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:21.554034 systemd-logind[2113]: New session 13 of user core. Jan 17 00:01:21.559137 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 17 00:01:21.963948 kubelet[3417]: E0117 00:01:21.963888 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:21.967112 kubelet[3417]: E0117 00:01:21.966698 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:21.968253 kubelet[3417]: E0117 00:01:21.968196 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:22.097170 sshd[5960]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:22.107249 systemd-logind[2113]: Session 13 logged out. Waiting for processes to exit. Jan 17 00:01:22.112380 systemd[1]: sshd@12-172.31.23.167:22-68.220.241.50:39822.service: Deactivated successfully. Jan 17 00:01:22.128495 systemd[1]: session-13.scope: Deactivated successfully. Jan 17 00:01:22.132376 systemd-logind[2113]: Removed session 13. 
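The pod_workers entries just above are kubelet replaying cached failures: once a pull fails, each pod resync reports ImagePullBackOff without contacting the registry, and a real PullImage is attempted again only when the backoff window expires — which is why fresh PullImage lines appear at widening intervals later in the log. A sketch of that schedule, assuming the commonly cited kubelet image-pull defaults (10s initial delay, doubling per failure, capped at 5 minutes):

    def backoff_delays(initial=10.0, cap=300.0):
        # Delay doubles after each failed pull, up to the cap (assumed defaults).
        d = initial
        while True:
            yield d
            d = min(d * 2, cap)

    delays = backoff_delays()
    print([next(delays) for _ in range(6)])   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]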
Jan 17 00:01:24.969771 ntpd[2095]: Listen normally on 6 vxlan.calico 192.168.38.64:123 Jan 17 00:01:24.969894 ntpd[2095]: Listen normally on 7 cali2c5a7be5d7d [fe80::ecee:eeff:feee:eeee%4]:123 Jan 17 00:01:24.969981 ntpd[2095]: Listen normally on 8 cali5cb9c52c6f2 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 17 00:01:24.970047 ntpd[2095]: Listen normally on 9 cali10176058c93 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 17 00:01:24.970112 ntpd[2095]: Listen normally on 10 cali6ed590c5016 [fe80::ecee:eeff:feee:eeee%7]:123 Jan 17 00:01:24.970178 ntpd[2095]: Listen normally on 11 vxlan.calico [fe80::64b1:55ff:fedf:300c%8]:123 Jan 17 00:01:24.970244 ntpd[2095]: Listen normally on 12 calid6b7e87d852 [fe80::ecee:eeff:feee:eeee%11]:123 Jan 17 00:01:24.970312 ntpd[2095]: Listen normally on 13 cali9519aa4d36b [fe80::ecee:eeff:feee:eeee%12]:123 Jan 17 00:01:24.970387 ntpd[2095]: Listen normally on 14 cali750295ee434 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 17 00:01:27.194941 systemd[1]: Started sshd@13-172.31.23.167:22-68.220.241.50:51880.service - OpenSSH per-connection server daemon (68.220.241.50:51880). Jan 17 00:01:27.844682 sshd[5988]: Accepted publickey for core from 68.220.241.50 port 51880 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:27.845840 sshd[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:27.853948 systemd-logind[2113]: New session 14 of user core. Jan 17 00:01:27.860954 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 17 00:01:28.345819 sshd[5988]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:28.354930 systemd[1]: sshd@13-172.31.23.167:22-68.220.241.50:51880.service: Deactivated successfully. Jan 17 00:01:28.360273 systemd[1]: session-14.scope: Deactivated successfully. Jan 17 00:01:28.360909 systemd-logind[2113]: Session 14 logged out. Waiting for processes to exit. Jan 17 00:01:28.365019 systemd-logind[2113]: Removed session 14.
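ntpd's burst of "Listen normally" lines is its periodic interface rescan: each veth Calico created (cali*) plus the vxlan.calico device gets its own UDP 123 listener once it carries an address. A small sketch of such a rescan, assuming the cali/vxlan naming seen in this log (socket.if_nameindex is Linux-specific):

    import socket

    # Enumerate current interfaces and flag the Calico-managed ones --
    # the set ntpd just opened listeners on above.
    for idx, name in socket.if_nameindex():
        if name.startswith(("cali", "vxlan.")):
            print(f"would listen on UDP 123 via {name} (ifindex {idx})")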
Jan 17 00:01:32.119883 containerd[2150]: time="2026-01-17T00:01:32.119819195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:01:32.420300 containerd[2150]: time="2026-01-17T00:01:32.420078397Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:32.423039 containerd[2150]: time="2026-01-17T00:01:32.422841253Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:01:32.423039 containerd[2150]: time="2026-01-17T00:01:32.422985589Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:32.425428 kubelet[3417]: E0117 00:01:32.423919 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:32.425428 kubelet[3417]: E0117 00:01:32.424002 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:32.425428 kubelet[3417]: E0117 00:01:32.424183 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlct9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-wxtgw_calico-apiserver(78e9de4b-ea97-4b48-8f59-1242c0c3be02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:32.428402 kubelet[3417]: E0117 00:01:32.428175 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:01:33.120938 containerd[2150]: time="2026-01-17T00:01:33.120872940Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:01:33.401916 containerd[2150]: time="2026-01-17T00:01:33.401862373Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:33.404063 containerd[2150]: time="2026-01-17T00:01:33.404003353Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:01:33.404259 containerd[2150]: time="2026-01-17T00:01:33.404135365Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:33.405431 kubelet[3417]: E0117 00:01:33.404470 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:33.405431 kubelet[3417]: E0117 00:01:33.404554 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:33.405431 kubelet[3417]: E0117 
00:01:33.404714 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkkn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:33.406600 kubelet[3417]: E0117 00:01:33.406351 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:33.437238 systemd[1]: Started sshd@14-172.31.23.167:22-68.220.241.50:52424.service - OpenSSH per-connection server daemon (68.220.241.50:52424). Jan 17 00:01:33.991507 sshd[6012]: Accepted publickey for core from 68.220.241.50 port 52424 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:33.993778 sshd[6012]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:34.004821 systemd-logind[2113]: New session 15 of user core. 
Jan 17 00:01:34.010036 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 17 00:01:34.494781 sshd[6012]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:34.504988 systemd-logind[2113]: Session 15 logged out. Waiting for processes to exit. Jan 17 00:01:34.506358 systemd[1]: sshd@14-172.31.23.167:22-68.220.241.50:52424.service: Deactivated successfully. Jan 17 00:01:34.513038 systemd[1]: session-15.scope: Deactivated successfully. Jan 17 00:01:34.517066 systemd-logind[2113]: Removed session 15. Jan 17 00:01:34.576943 systemd[1]: Started sshd@15-172.31.23.167:22-68.220.241.50:52440.service - OpenSSH per-connection server daemon (68.220.241.50:52440). Jan 17 00:01:35.081001 sshd[6026]: Accepted publickey for core from 68.220.241.50 port 52440 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:35.083693 sshd[6026]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:35.091892 systemd-logind[2113]: New session 16 of user core. Jan 17 00:01:35.096032 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 17 00:01:35.119344 containerd[2150]: time="2026-01-17T00:01:35.118989434Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:01:35.440521 containerd[2150]: time="2026-01-17T00:01:35.440398300Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:35.442640 containerd[2150]: time="2026-01-17T00:01:35.442566532Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:01:35.442640 containerd[2150]: time="2026-01-17T00:01:35.442612720Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:01:35.443131 kubelet[3417]: E0117 00:01:35.442866 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:01:35.443131 kubelet[3417]: E0117 00:01:35.442960 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:01:35.444707 kubelet[3417]: E0117 00:01:35.443160 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2snwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:35.445022 kubelet[3417]: E0117 00:01:35.444812 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:35.824081 sshd[6026]: pam_unix(sshd:session): session closed for user core Jan 17 
00:01:35.832359 systemd[1]: sshd@15-172.31.23.167:22-68.220.241.50:52440.service: Deactivated successfully. Jan 17 00:01:35.839965 systemd[1]: session-16.scope: Deactivated successfully. Jan 17 00:01:35.840903 systemd-logind[2113]: Session 16 logged out. Waiting for processes to exit. Jan 17 00:01:35.843132 systemd-logind[2113]: Removed session 16. Jan 17 00:01:35.909911 systemd[1]: Started sshd@16-172.31.23.167:22-68.220.241.50:52446.service - OpenSSH per-connection server daemon (68.220.241.50:52446). Jan 17 00:01:36.119580 containerd[2150]: time="2026-01-17T00:01:36.118825143Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:01:36.419331 sshd[6038]: Accepted publickey for core from 68.220.241.50 port 52446 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:36.422021 sshd[6038]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:36.429982 systemd-logind[2113]: New session 17 of user core. Jan 17 00:01:36.435727 containerd[2150]: time="2026-01-17T00:01:36.435651976Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:36.438012 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 17 00:01:36.439214 containerd[2150]: time="2026-01-17T00:01:36.439149161Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:01:36.439516 containerd[2150]: time="2026-01-17T00:01:36.439368077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:01:36.439939 kubelet[3417]: E0117 00:01:36.439889 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:01:36.440148 kubelet[3417]: E0117 00:01:36.440116 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:01:36.440920 kubelet[3417]: E0117 00:01:36.440706 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:36.441783 containerd[2150]: time="2026-01-17T00:01:36.441437669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:01:36.826786 containerd[2150]: time="2026-01-17T00:01:36.826635354Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:36.829121 containerd[2150]: time="2026-01-17T00:01:36.828379506Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:01:36.829121 containerd[2150]: time="2026-01-17T00:01:36.828505242Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:36.829751 kubelet[3417]: E0117 00:01:36.828715 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:01:36.829751 kubelet[3417]: E0117 00:01:36.828775 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:01:36.831393 containerd[2150]: time="2026-01-17T00:01:36.830743758Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:01:36.831565 kubelet[3417]: E0117 00:01:36.830900 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lmlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmx9m_calico-system(e6a5f346-af6c-40f3-8c32-a682e7923b77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:36.836052 kubelet[3417]: E0117 00:01:36.833118 3417 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:37.135903 containerd[2150]: time="2026-01-17T00:01:37.134388988Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:37.137656 containerd[2150]: time="2026-01-17T00:01:37.135896164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:01:37.137656 containerd[2150]: time="2026-01-17T00:01:37.136174048Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:01:37.137794 kubelet[3417]: E0117 00:01:37.137125 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:01:37.137794 kubelet[3417]: E0117 00:01:37.137206 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:01:37.137794 kubelet[3417]: E0117 00:01:37.137362 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:37.142178 kubelet[3417]: E0117 00:01:37.139342 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:38.201294 sshd[6038]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:38.221606 systemd[1]: sshd@16-172.31.23.167:22-68.220.241.50:52446.service: Deactivated successfully. Jan 17 00:01:38.233671 systemd[1]: session-17.scope: Deactivated successfully. Jan 17 00:01:38.236993 systemd-logind[2113]: Session 17 logged out. Waiting for processes to exit. Jan 17 00:01:38.241745 systemd-logind[2113]: Removed session 17. 
Jan 17 00:01:38.289572 systemd[1]: Started sshd@17-172.31.23.167:22-68.220.241.50:52450.service - OpenSSH per-connection server daemon (68.220.241.50:52450). Jan 17 00:01:38.818642 sshd[6062]: Accepted publickey for core from 68.220.241.50 port 52450 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:38.822172 sshd[6062]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:38.836153 systemd-logind[2113]: New session 18 of user core. Jan 17 00:01:38.844048 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 17 00:01:39.806234 sshd[6062]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:39.817841 systemd[1]: sshd@17-172.31.23.167:22-68.220.241.50:52450.service: Deactivated successfully. Jan 17 00:01:39.828953 systemd[1]: session-18.scope: Deactivated successfully. Jan 17 00:01:39.835099 systemd-logind[2113]: Session 18 logged out. Waiting for processes to exit. Jan 17 00:01:39.841174 systemd-logind[2113]: Removed session 18. Jan 17 00:01:39.911356 systemd[1]: Started sshd@18-172.31.23.167:22-68.220.241.50:52466.service - OpenSSH per-connection server daemon (68.220.241.50:52466). Jan 17 00:01:40.470228 sshd[6074]: Accepted publickey for core from 68.220.241.50 port 52466 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:40.473006 sshd[6074]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:40.482214 systemd-logind[2113]: New session 19 of user core. Jan 17 00:01:40.491097 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 17 00:01:40.961805 sshd[6074]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:40.969566 systemd[1]: sshd@18-172.31.23.167:22-68.220.241.50:52466.service: Deactivated successfully. Jan 17 00:01:40.975190 systemd[1]: session-19.scope: Deactivated successfully. Jan 17 00:01:40.975965 systemd-logind[2113]: Session 19 logged out. Waiting for processes to exit. Jan 17 00:01:40.978875 systemd-logind[2113]: Removed session 19. Jan 17 00:01:46.051911 systemd[1]: Started sshd@19-172.31.23.167:22-68.220.241.50:34646.service - OpenSSH per-connection server daemon (68.220.241.50:34646). 
Jan 17 00:01:46.118820 kubelet[3417]: E0117 00:01:46.118266 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:01:46.122717 kubelet[3417]: E0117 00:01:46.122586 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:01:46.613266 sshd[6118]: Accepted publickey for core from 68.220.241.50 port 34646 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:46.616799 sshd[6118]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:46.626569 systemd-logind[2113]: New session 20 of user core. Jan 17 00:01:46.633131 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 17 00:01:47.119296 sshd[6118]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:47.128598 systemd[1]: sshd@19-172.31.23.167:22-68.220.241.50:34646.service: Deactivated successfully. Jan 17 00:01:47.135057 systemd[1]: session-20.scope: Deactivated successfully. Jan 17 00:01:47.136975 systemd-logind[2113]: Session 20 logged out. Waiting for processes to exit. Jan 17 00:01:47.140361 systemd-logind[2113]: Removed session 20. 
Jan 17 00:01:48.119821 kubelet[3417]: E0117 00:01:48.118409 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:01:49.121369 kubelet[3417]: E0117 00:01:49.120120 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:01:50.125152 kubelet[3417]: E0117 00:01:50.123711 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:01:52.198981 systemd[1]: Started sshd@20-172.31.23.167:22-68.220.241.50:34658.service - OpenSSH per-connection server daemon (68.220.241.50:34658). Jan 17 00:01:52.715389 sshd[6132]: Accepted publickey for core from 68.220.241.50 port 34658 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:52.718102 sshd[6132]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:52.726611 systemd-logind[2113]: New session 21 of user core. Jan 17 00:01:52.737092 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 17 00:01:53.291743 sshd[6132]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:53.303938 systemd-logind[2113]: Session 21 logged out. Waiting for processes to exit. Jan 17 00:01:53.307318 systemd[1]: sshd@20-172.31.23.167:22-68.220.241.50:34658.service: Deactivated successfully. Jan 17 00:01:53.315474 systemd[1]: session-21.scope: Deactivated successfully. Jan 17 00:01:53.320373 systemd-logind[2113]: Removed session 21. 
Jan 17 00:01:57.123767 containerd[2150]: time="2026-01-17T00:01:57.123433223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:01:57.408039 containerd[2150]: time="2026-01-17T00:01:57.407823505Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:01:57.410385 containerd[2150]: time="2026-01-17T00:01:57.410048137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:01:57.410385 containerd[2150]: time="2026-01-17T00:01:57.410135353Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:01:57.410906 kubelet[3417]: E0117 00:01:57.410824 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:57.411530 kubelet[3417]: E0117 00:01:57.410906 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:01:57.411530 kubelet[3417]: E0117 00:01:57.411082 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dlct9,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-wxtgw_calico-apiserver(78e9de4b-ea97-4b48-8f59-1242c0c3be02): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:01:57.412607 kubelet[3417]: E0117 00:01:57.412241 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:01:58.391638 systemd[1]: Started sshd@21-172.31.23.167:22-68.220.241.50:38336.service - OpenSSH per-connection server daemon (68.220.241.50:38336). Jan 17 00:01:58.963571 sshd[6146]: Accepted publickey for core from 68.220.241.50 port 38336 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:01:58.968602 sshd[6146]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:01:58.991824 systemd-logind[2113]: New session 22 of user core. Jan 17 00:01:58.998888 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 17 00:01:59.523358 sshd[6146]: pam_unix(sshd:session): session closed for user core Jan 17 00:01:59.538149 systemd-logind[2113]: Session 22 logged out. Waiting for processes to exit. Jan 17 00:01:59.540336 systemd[1]: sshd@21-172.31.23.167:22-68.220.241.50:38336.service: Deactivated successfully. Jan 17 00:01:59.555269 systemd[1]: session-22.scope: Deactivated successfully. Jan 17 00:01:59.562553 systemd-logind[2113]: Removed session 22. 
Jan 17 00:02:01.122742 containerd[2150]: time="2026-01-17T00:02:01.121745775Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:01.550297 containerd[2150]: time="2026-01-17T00:02:01.550219025Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:01.554136 containerd[2150]: time="2026-01-17T00:02:01.552553409Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:01.554136 containerd[2150]: time="2026-01-17T00:02:01.552656453Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:01.554364 kubelet[3417]: E0117 00:02:01.552893 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:01.554364 kubelet[3417]: E0117 00:02:01.552962 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:01.554364 kubelet[3417]: E0117 00:02:01.553287 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkkn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:01.557506 kubelet[3417]: E0117 00:02:01.555599 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:02:01.557737 containerd[2150]: time="2026-01-17T00:02:01.555931709Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:02:01.838524 containerd[2150]: time="2026-01-17T00:02:01.837457111Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:01.841671 containerd[2150]: time="2026-01-17T00:02:01.841574863Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:02:01.841805 containerd[2150]: time="2026-01-17T00:02:01.841736671Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:01.842075 kubelet[3417]: E0117 00:02:01.842005 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:01.842170 kubelet[3417]: E0117 00:02:01.842080 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:01.842540 kubelet[3417]: E0117 00:02:01.842394 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2snwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:01.845473 containerd[2150]: time="2026-01-17T00:02:01.844081387Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 17 00:02:01.845625 kubelet[3417]: E0117 00:02:01.844752 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:02:02.146778 containerd[2150]: time="2026-01-17T00:02:02.145979092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:02.149749 containerd[2150]: time="2026-01-17T00:02:02.149596972Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 17 00:02:02.149872 containerd[2150]: time="2026-01-17T00:02:02.149688688Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:02.150088 kubelet[3417]: E0117 00:02:02.150017 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:02.150173 kubelet[3417]: E0117 00:02:02.150091 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 17 00:02:02.150369 kubelet[3417]: E0117 00:02:02.150269 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9lmlh,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-vmx9m_calico-system(e6a5f346-af6c-40f3-8c32-a682e7923b77): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:02.152235 kubelet[3417]: E0117 00:02:02.152135 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:02:03.132036 containerd[2150]: time="2026-01-17T00:02:03.131910401Z" level=info msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\"" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.241 [WARNING][6175] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6a5f346-af6c-40f3-8c32-a682e7923b77", ResourceVersion:"1437", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087", Pod:"goldmane-666569f655-vmx9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9519aa4d36b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.242 [INFO][6175] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.242 [INFO][6175] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" iface="eth0" netns="" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.242 [INFO][6175] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.242 [INFO][6175] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.335 [INFO][6184] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.336 [INFO][6184] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.336 [INFO][6184] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.356 [WARNING][6184] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.356 [INFO][6184] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.358 [INFO][6184] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:03.366544 containerd[2150]: 2026-01-17 00:02:03.363 [INFO][6175] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.366544 containerd[2150]: time="2026-01-17T00:02:03.366368178Z" level=info msg="TearDown network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" successfully" Jan 17 00:02:03.366544 containerd[2150]: time="2026-01-17T00:02:03.366406146Z" level=info msg="StopPodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" returns successfully" Jan 17 00:02:03.369512 containerd[2150]: time="2026-01-17T00:02:03.369183618Z" level=info msg="RemovePodSandbox for \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\"" Jan 17 00:02:03.369512 containerd[2150]: time="2026-01-17T00:02:03.369243954Z" level=info msg="Forcibly stopping sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\"" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.479 [WARNING][6198] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"e6a5f346-af6c-40f3-8c32-a682e7923b77", ResourceVersion:"1437", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 30, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"556871652d5a2c6e669272e1ab9dc4a9cf940c8b62381f404beb634858f31087", Pod:"goldmane-666569f655-vmx9m", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.38.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali9519aa4d36b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.481 [INFO][6198] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.481 [INFO][6198] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" iface="eth0" netns="" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.481 [INFO][6198] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.481 [INFO][6198] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.618 [INFO][6205] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.624 [INFO][6205] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.624 [INFO][6205] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.646 [WARNING][6205] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.646 [INFO][6205] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" HandleID="k8s-pod-network.5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Workload="ip--172--31--23--167-k8s-goldmane--666569f655--vmx9m-eth0" Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.649 [INFO][6205] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:03.663508 containerd[2150]: 2026-01-17 00:02:03.655 [INFO][6198] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138" Jan 17 00:02:03.663508 containerd[2150]: time="2026-01-17T00:02:03.662777360Z" level=info msg="TearDown network for sandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" successfully" Jan 17 00:02:03.680070 containerd[2150]: time="2026-01-17T00:02:03.679481936Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:03.680070 containerd[2150]: time="2026-01-17T00:02:03.679605776Z" level=info msg="RemovePodSandbox \"5f635d8d36e85042720dcee87b4ad88eb41c17ac411f175578b26435dd79f138\" returns successfully" Jan 17 00:02:03.680070 containerd[2150]: time="2026-01-17T00:02:03.680307164Z" level=info msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.779 [WARNING][6219] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7326267c-1eb2-4759-b98f-e8dc2742ecd4", ResourceVersion:"1389", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc", Pod:"csi-node-driver-jjr5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali750295ee434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.779 [INFO][6219] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.779 [INFO][6219] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" iface="eth0" netns="" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.779 [INFO][6219] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.780 [INFO][6219] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.825 [INFO][6226] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.826 [INFO][6226] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.826 [INFO][6226] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.840 [WARNING][6226] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.840 [INFO][6226] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.842 [INFO][6226] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:03.852133 containerd[2150]: 2026-01-17 00:02:03.846 [INFO][6219] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:03.853320 containerd[2150]: time="2026-01-17T00:02:03.852624201Z" level=info msg="TearDown network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" successfully" Jan 17 00:02:03.853320 containerd[2150]: time="2026-01-17T00:02:03.852664089Z" level=info msg="StopPodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" returns successfully" Jan 17 00:02:03.854771 containerd[2150]: time="2026-01-17T00:02:03.854233929Z" level=info msg="RemovePodSandbox for \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" Jan 17 00:02:03.854771 containerd[2150]: time="2026-01-17T00:02:03.854281581Z" level=info msg="Forcibly stopping sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\"" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.934 [WARNING][6241] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"7326267c-1eb2-4759-b98f-e8dc2742ecd4", ResourceVersion:"1389", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 36, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"2a71420f8f216c772920940111f5aea2b0a8c9b3933e31cb3bf94e88c8a092bc", Pod:"csi-node-driver-jjr5r", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.38.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali750295ee434", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.935 [INFO][6241] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.935 [INFO][6241] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" iface="eth0" netns="" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.935 [INFO][6241] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.935 [INFO][6241] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.991 [INFO][6248] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.991 [INFO][6248] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:03.991 [INFO][6248] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:04.009 [WARNING][6248] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:04.009 [INFO][6248] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" HandleID="k8s-pod-network.15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Workload="ip--172--31--23--167-k8s-csi--node--driver--jjr5r-eth0" Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:04.012 [INFO][6248] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:04.027359 containerd[2150]: 2026-01-17 00:02:04.020 [INFO][6241] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027" Jan 17 00:02:04.030705 containerd[2150]: time="2026-01-17T00:02:04.029540790Z" level=info msg="TearDown network for sandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" successfully" Jan 17 00:02:04.038857 containerd[2150]: time="2026-01-17T00:02:04.038687574Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:04.040051 containerd[2150]: time="2026-01-17T00:02:04.039055242Z" level=info msg="RemovePodSandbox \"15004fba5231a318a6fcdc71e48f59cc517facede5099a6325e4054bc554b027\" returns successfully" Jan 17 00:02:04.040051 containerd[2150]: time="2026-01-17T00:02:04.039671538Z" level=info msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.111 [WARNING][6263] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" WorkloadEndpoint="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.111 [INFO][6263] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.112 [INFO][6263] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" iface="eth0" netns="" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.112 [INFO][6263] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.112 [INFO][6263] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.170 [INFO][6270] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.171 [INFO][6270] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.171 [INFO][6270] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.186 [WARNING][6270] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.186 [INFO][6270] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.189 [INFO][6270] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:04.199494 containerd[2150]: 2026-01-17 00:02:04.191 [INFO][6263] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.206527 containerd[2150]: time="2026-01-17T00:02:04.203572986Z" level=info msg="TearDown network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" successfully" Jan 17 00:02:04.206527 containerd[2150]: time="2026-01-17T00:02:04.203622798Z" level=info msg="StopPodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" returns successfully" Jan 17 00:02:04.206527 containerd[2150]: time="2026-01-17T00:02:04.205228518Z" level=info msg="RemovePodSandbox for \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" Jan 17 00:02:04.206527 containerd[2150]: time="2026-01-17T00:02:04.205281474Z" level=info msg="Forcibly stopping sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\"" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.356 [WARNING][6283] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" WorkloadEndpoint="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.356 [INFO][6283] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.357 [INFO][6283] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" iface="eth0" netns="" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.357 [INFO][6283] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.357 [INFO][6283] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.408 [INFO][6291] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.408 [INFO][6291] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.408 [INFO][6291] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.422 [WARNING][6291] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.423 [INFO][6291] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" HandleID="k8s-pod-network.0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Workload="ip--172--31--23--167-k8s-whisker--7798df58d7--vkd8q-eth0" Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.428 [INFO][6291] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:04.434762 containerd[2150]: 2026-01-17 00:02:04.431 [INFO][6283] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0" Jan 17 00:02:04.438843 containerd[2150]: time="2026-01-17T00:02:04.434809436Z" level=info msg="TearDown network for sandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" successfully" Jan 17 00:02:04.442167 containerd[2150]: time="2026-01-17T00:02:04.442092776Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:04.442331 containerd[2150]: time="2026-01-17T00:02:04.442190156Z" level=info msg="RemovePodSandbox \"0cde0c2e95edf2586a93dd557d7254310ee0abaf4168049f29efd56c7f1e1bb0\" returns successfully" Jan 17 00:02:04.442900 containerd[2150]: time="2026-01-17T00:02:04.442840640Z" level=info msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.520 [WARNING][6305] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0504424-3111-46f2-be7a-effe09d60f69", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779", Pod:"coredns-668d6bf9bc-xjcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5a7be5d7d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.525 [INFO][6305] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.526 [INFO][6305] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" iface="eth0" netns="" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.528 [INFO][6305] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.528 [INFO][6305] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.625 [INFO][6312] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.627 [INFO][6312] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.628 [INFO][6312] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.655 [WARNING][6312] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.655 [INFO][6312] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.663 [INFO][6312] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:04.678158 containerd[2150]: 2026-01-17 00:02:04.670 [INFO][6305] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.678993 containerd[2150]: time="2026-01-17T00:02:04.678258237Z" level=info msg="TearDown network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" successfully" Jan 17 00:02:04.678993 containerd[2150]: time="2026-01-17T00:02:04.678298665Z" level=info msg="StopPodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" returns successfully" Jan 17 00:02:04.680860 containerd[2150]: time="2026-01-17T00:02:04.680787441Z" level=info msg="RemovePodSandbox for \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" Jan 17 00:02:04.681026 containerd[2150]: time="2026-01-17T00:02:04.680871429Z" level=info msg="Forcibly stopping sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\"" Jan 17 00:02:04.753233 systemd[1]: Started sshd@22-172.31.23.167:22-68.220.241.50:54564.service - OpenSSH per-connection server daemon (68.220.241.50:54564). Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.853 [WARNING][6327] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"d0504424-3111-46f2-be7a-effe09d60f69", ResourceVersion:"1050", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"cb58af2090b2de8d4f22a131ee88c7b5c22008ae0506bd8e4f971d5868f45779", Pod:"coredns-668d6bf9bc-xjcx7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2c5a7be5d7d", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.853 [INFO][6327] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.854 [INFO][6327] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" iface="eth0" netns="" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.854 [INFO][6327] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.854 [INFO][6327] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.907 [INFO][6335] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.908 [INFO][6335] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.908 [INFO][6335] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.921 [WARNING][6335] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.921 [INFO][6335] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" HandleID="k8s-pod-network.0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--xjcx7-eth0" Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.924 [INFO][6335] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:04.931729 containerd[2150]: 2026-01-17 00:02:04.927 [INFO][6327] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60" Jan 17 00:02:04.932585 containerd[2150]: time="2026-01-17T00:02:04.931711762Z" level=info msg="TearDown network for sandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" successfully" Jan 17 00:02:04.939862 containerd[2150]: time="2026-01-17T00:02:04.939717370Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:04.940040 containerd[2150]: time="2026-01-17T00:02:04.939915898Z" level=info msg="RemovePodSandbox \"0a480e090455840e0dc30323df9ee2cb57f3bbdaf987954ee3639d50d894eb60\" returns successfully" Jan 17 00:02:04.943230 containerd[2150]: time="2026-01-17T00:02:04.942903178Z" level=info msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\"" Jan 17 00:02:05.127913 containerd[2150]: time="2026-01-17T00:02:05.126537187Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.080 [WARNING][6349] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb54f0b2-682d-402d-a9c8-8c6e24f363be", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36", Pod:"coredns-668d6bf9bc-4l665", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10176058c93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.083 [INFO][6349] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.084 [INFO][6349] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" iface="eth0" netns="" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.084 [INFO][6349] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.085 [INFO][6349] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.180 [INFO][6356] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.181 [INFO][6356] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.181 [INFO][6356] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.195 [WARNING][6356] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.196 [INFO][6356] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.198 [INFO][6356] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:05.205741 containerd[2150]: 2026-01-17 00:02:05.202 [INFO][6349] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.205741 containerd[2150]: time="2026-01-17T00:02:05.205521571Z" level=info msg="TearDown network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" successfully" Jan 17 00:02:05.205741 containerd[2150]: time="2026-01-17T00:02:05.205582027Z" level=info msg="StopPodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" returns successfully" Jan 17 00:02:05.209038 containerd[2150]: time="2026-01-17T00:02:05.207032155Z" level=info msg="RemovePodSandbox for \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\"" Jan 17 00:02:05.209038 containerd[2150]: time="2026-01-17T00:02:05.207080131Z" level=info msg="Forcibly stopping sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\"" Jan 17 00:02:05.349952 sshd[6318]: Accepted publickey for core from 68.220.241.50 port 54564 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:05.360245 sshd[6318]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:05.385820 systemd-logind[2113]: New session 23 of user core. Jan 17 00:02:05.394362 systemd[1]: Started session-23.scope - Session 23 of User core. 
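The PullImage of ghcr.io/flatcar/calico/csi:v3.30.4 started above fails in the entries that follow: the resolver gets a 404 from ghcr.io ("trying next host - response was http.StatusNotFound"), containerd surfaces NotFound, and kubelet records ErrImagePull for both csi-node-driver containers. The resolution failure can be reproduced outside kubelet with containerd's Go client; a minimal sketch, assuming the default socket path and the conventional "k8s.io" namespace used for kubelet-pulled images:

```go
// Minimal containerd-client pull, reproducing the NotFound seen in the log.
package main

import (
	"context"
	"fmt"
	"log"

	"github.com/containerd/containerd"
	"github.com/containerd/containerd/namespaces"
)

func main() {
	client, err := containerd.New("/run/containerd/containerd.sock")
	if err != nil {
		log.Fatal(err)
	}
	defer client.Close()

	// kubelet-managed images live in the "k8s.io" namespace
	ctx := namespaces.WithNamespace(context.Background(), "k8s.io")

	ref := "ghcr.io/flatcar/calico/csi:v3.30.4"
	if _, err := client.Pull(ctx, ref, containerd.WithPullUnpack); err != nil {
		// expected here: the reference fails to resolve, matching the
		// "failed to resolve reference ... not found" errors in the log
		fmt.Printf("pull %s failed: %v\n", ref, err)
	}
}
```

The same check is quicker from a shell on the node with ctr (`ctr -n k8s.io images pull ghcr.io/flatcar/calico/csi:v3.30.4`) or crictl (`crictl pull ghcr.io/flatcar/calico/csi:v3.30.4`); the log alone does not say whether the tag is genuinely absent from the registry or merely unreachable from this node.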
Jan 17 00:02:05.427987 containerd[2150]: time="2026-01-17T00:02:05.426757184Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:05.430365 containerd[2150]: time="2026-01-17T00:02:05.429965853Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 17 00:02:05.430531 containerd[2150]: time="2026-01-17T00:02:05.430148265Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 17 00:02:05.434476 kubelet[3417]: E0117 00:02:05.431601 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:05.434476 kubelet[3417]: E0117 00:02:05.431673 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 17 00:02:05.434476 kubelet[3417]: E0117 00:02:05.431836 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:05.440265 containerd[2150]: time="2026-01-17T00:02:05.435426849Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.331 [WARNING][6369] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"cb54f0b2-682d-402d-a9c8-8c6e24f363be", ResourceVersion:"1087", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 7, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"341678dbb8a7223aa8b701b1fceec776cafed8060b39f99bbb4f9540955f3c36", Pod:"coredns-668d6bf9bc-4l665", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.38.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali10176058c93", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.333 [INFO][6369] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.333 [INFO][6369] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" iface="eth0" netns="" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.333 [INFO][6369] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.333 [INFO][6369] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.454 [INFO][6376] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.455 [INFO][6376] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.455 [INFO][6376] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.478 [WARNING][6376] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.478 [INFO][6376] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" HandleID="k8s-pod-network.8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Workload="ip--172--31--23--167-k8s-coredns--668d6bf9bc--4l665-eth0" Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.484 [INFO][6376] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:05.500886 containerd[2150]: 2026-01-17 00:02:05.493 [INFO][6369] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3" Jan 17 00:02:05.500886 containerd[2150]: time="2026-01-17T00:02:05.500771193Z" level=info msg="TearDown network for sandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" successfully" Jan 17 00:02:05.517818 containerd[2150]: time="2026-01-17T00:02:05.517739217Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:05.517954 containerd[2150]: time="2026-01-17T00:02:05.517841721Z" level=info msg="RemovePodSandbox \"8fa3796933689d6a32862ca4efb31d8cbfff6da3c1140a869323bce0937588b3\" returns successfully" Jan 17 00:02:05.522586 containerd[2150]: time="2026-01-17T00:02:05.518554509Z" level=info msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\"" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.602 [WARNING][6392] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d919700-9b50-4829-84da-97568c603805", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875", Pod:"calico-apiserver-5fccc8c4dd-j7cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ed590c5016", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.603 [INFO][6392] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.603 [INFO][6392] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" iface="eth0" netns="" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.603 [INFO][6392] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.603 [INFO][6392] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.729 [INFO][6400] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.729 [INFO][6400] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.730 [INFO][6400] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.744 [WARNING][6400] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.744 [INFO][6400] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.748 [INFO][6400] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:05.759903 containerd[2150]: 2026-01-17 00:02:05.755 [INFO][6392] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.759903 containerd[2150]: time="2026-01-17T00:02:05.759764362Z" level=info msg="TearDown network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" successfully" Jan 17 00:02:05.759903 containerd[2150]: time="2026-01-17T00:02:05.759805018Z" level=info msg="StopPodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" returns successfully" Jan 17 00:02:05.763724 containerd[2150]: time="2026-01-17T00:02:05.762611374Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:05.763724 containerd[2150]: time="2026-01-17T00:02:05.762927598Z" level=info msg="RemovePodSandbox for \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\"" Jan 17 00:02:05.763724 containerd[2150]: time="2026-01-17T00:02:05.763228918Z" level=info msg="Forcibly stopping sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\"" Jan 17 00:02:05.767748 containerd[2150]: time="2026-01-17T00:02:05.767605654Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 17 00:02:05.768744 containerd[2150]: time="2026-01-17T00:02:05.767940238Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 17 00:02:05.772507 kubelet[3417]: E0117 00:02:05.770976 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:05.772507 kubelet[3417]: E0117 00:02:05.771038 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 17 00:02:05.772507 kubelet[3417]: E0117 
00:02:05.771194 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-s9cl6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-jjr5r_calico-system(7326267c-1eb2-4759-b98f-e8dc2742ecd4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:05.774798 kubelet[3417]: E0117 00:02:05.774611 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.895 [WARNING][6420] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"5d919700-9b50-4829-84da-97568c603805", ResourceVersion:"1434", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"c32784b28a6841d4285ef81d7486f2c8e700fdfa5318735e8593b668d8671875", Pod:"calico-apiserver-5fccc8c4dd-j7cxw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali6ed590c5016", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.896 [INFO][6420] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.896 [INFO][6420] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" iface="eth0" netns="" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.896 [INFO][6420] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.896 [INFO][6420] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.952 [INFO][6428] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.953 [INFO][6428] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.953 [INFO][6428] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.978 [WARNING][6428] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.978 [INFO][6428] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" HandleID="k8s-pod-network.61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--j7cxw-eth0" Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.983 [INFO][6428] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:05.990741 containerd[2150]: 2026-01-17 00:02:05.986 [INFO][6420] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4" Jan 17 00:02:05.990741 containerd[2150]: time="2026-01-17T00:02:05.989949755Z" level=info msg="TearDown network for sandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" successfully" Jan 17 00:02:06.000501 containerd[2150]: time="2026-01-17T00:02:05.996904127Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:06.000501 containerd[2150]: time="2026-01-17T00:02:05.997162787Z" level=info msg="RemovePodSandbox \"61a54458b1ab75daa91ee64aa9f46d4464951d16179c86a9a2b30951fcb952f4\" returns successfully" Jan 17 00:02:05.997657 sshd[6318]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:06.001640 containerd[2150]: time="2026-01-17T00:02:06.000810139Z" level=info msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\"" Jan 17 00:02:06.015268 systemd[1]: sshd@22-172.31.23.167:22-68.220.241.50:54564.service: Deactivated successfully. Jan 17 00:02:06.032173 systemd[1]: session-23.scope: Deactivated successfully. Jan 17 00:02:06.032729 systemd-logind[2113]: Session 23 logged out. Waiting for processes to exit. Jan 17 00:02:06.042112 systemd-logind[2113]: Removed session 23. Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.133 [WARNING][6446] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0", GenerateName:"calico-kube-controllers-74f49dc95d-", Namespace:"calico-system", SelfLink:"", UID:"26583054-1df4-4aad-bd58-41f9694f0072", ResourceVersion:"1438", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f49dc95d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0", Pod:"calico-kube-controllers-74f49dc95d-4gk47", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6b7e87d852", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.135 [INFO][6446] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.135 [INFO][6446] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" iface="eth0" netns="" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.135 [INFO][6446] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.135 [INFO][6446] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.267 [INFO][6454] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.272 [INFO][6454] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.272 [INFO][6454] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.294 [WARNING][6454] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.294 [INFO][6454] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.299 [INFO][6454] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:06.313207 containerd[2150]: 2026-01-17 00:02:06.307 [INFO][6446] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.313963 containerd[2150]: time="2026-01-17T00:02:06.313290561Z" level=info msg="TearDown network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" successfully" Jan 17 00:02:06.313963 containerd[2150]: time="2026-01-17T00:02:06.313515069Z" level=info msg="StopPodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" returns successfully" Jan 17 00:02:06.319213 containerd[2150]: time="2026-01-17T00:02:06.318715713Z" level=info msg="RemovePodSandbox for \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\"" Jan 17 00:02:06.319213 containerd[2150]: time="2026-01-17T00:02:06.318777933Z" level=info msg="Forcibly stopping sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\"" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.414 [WARNING][6469] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0", GenerateName:"calico-kube-controllers-74f49dc95d-", Namespace:"calico-system", SelfLink:"", UID:"26583054-1df4-4aad-bd58-41f9694f0072", ResourceVersion:"1438", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"74f49dc95d", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"886be1cc870d254217038ad4b8cfc88aade44f49f6a6c7179808895b7d5abdb0", Pod:"calico-kube-controllers-74f49dc95d-4gk47", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.38.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calid6b7e87d852", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.416 [INFO][6469] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.416 [INFO][6469] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" iface="eth0" netns="" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.416 [INFO][6469] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.416 [INFO][6469] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.468 [INFO][6476] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.469 [INFO][6476] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.469 [INFO][6476] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.491 [WARNING][6476] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.491 [INFO][6476] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" HandleID="k8s-pod-network.1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Workload="ip--172--31--23--167-k8s-calico--kube--controllers--74f49dc95d--4gk47-eth0" Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.495 [INFO][6476] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:06.503084 containerd[2150]: 2026-01-17 00:02:06.499 [INFO][6469] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca" Jan 17 00:02:06.504379 containerd[2150]: time="2026-01-17T00:02:06.503210158Z" level=info msg="TearDown network for sandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" successfully" Jan 17 00:02:06.510697 containerd[2150]: time="2026-01-17T00:02:06.510600346Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:06.510869 containerd[2150]: time="2026-01-17T00:02:06.510701182Z" level=info msg="RemovePodSandbox \"1ee59f46e74ccd67a9aba140e32520c2aea2c0a57b4f83386e3b02f273291cca\" returns successfully" Jan 17 00:02:06.512476 containerd[2150]: time="2026-01-17T00:02:06.511408342Z" level=info msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.590 [WARNING][6490] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e9de4b-ea97-4b48-8f59-1242c0c3be02", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3", Pod:"calico-apiserver-5fccc8c4dd-wxtgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb9c52c6f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.591 [INFO][6490] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.591 [INFO][6490] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" iface="eth0" netns="" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.591 [INFO][6490] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.591 [INFO][6490] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.638 [INFO][6497] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.639 [INFO][6497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.639 [INFO][6497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.655 [WARNING][6497] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.656 [INFO][6497] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.660 [INFO][6497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:06.667355 containerd[2150]: 2026-01-17 00:02:06.663 [INFO][6490] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.670494 containerd[2150]: time="2026-01-17T00:02:06.668612123Z" level=info msg="TearDown network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" successfully" Jan 17 00:02:06.670494 containerd[2150]: time="2026-01-17T00:02:06.668686691Z" level=info msg="StopPodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" returns successfully" Jan 17 00:02:06.670494 containerd[2150]: time="2026-01-17T00:02:06.669524723Z" level=info msg="RemovePodSandbox for \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" Jan 17 00:02:06.670494 containerd[2150]: time="2026-01-17T00:02:06.669571547Z" level=info msg="Forcibly stopping sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\"" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.752 [WARNING][6511] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0", GenerateName:"calico-apiserver-5fccc8c4dd-", Namespace:"calico-apiserver", SelfLink:"", UID:"78e9de4b-ea97-4b48-8f59-1242c0c3be02", ResourceVersion:"1414", Generation:0, CreationTimestamp:time.Date(2026, time.January, 17, 0, 0, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5fccc8c4dd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-23-167", ContainerID:"03e2949e4dbfaafc539259f5c2bb0e7eccdc6a915635dbb00e0b4ca6cf6753b3", Pod:"calico-apiserver-5fccc8c4dd-wxtgw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.38.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali5cb9c52c6f2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.753 [INFO][6511] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.753 [INFO][6511] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" iface="eth0" netns="" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.753 [INFO][6511] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.753 [INFO][6511] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.800 [INFO][6518] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.801 [INFO][6518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.801 [INFO][6518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.814 [WARNING][6518] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.814 [INFO][6518] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" HandleID="k8s-pod-network.fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Workload="ip--172--31--23--167-k8s-calico--apiserver--5fccc8c4dd--wxtgw-eth0" Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.817 [INFO][6518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 17 00:02:06.823267 containerd[2150]: 2026-01-17 00:02:06.820 [INFO][6511] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7" Jan 17 00:02:06.824032 containerd[2150]: time="2026-01-17T00:02:06.823306391Z" level=info msg="TearDown network for sandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" successfully" Jan 17 00:02:06.829986 containerd[2150]: time="2026-01-17T00:02:06.829888451Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 17 00:02:06.830176 containerd[2150]: time="2026-01-17T00:02:06.829989635Z" level=info msg="RemovePodSandbox \"fde178c7c2f95ccb624b493ce3a270df85a996ffc7d71cd5dbeb47b512a58ae7\" returns successfully" Jan 17 00:02:11.090764 systemd[1]: Started sshd@23-172.31.23.167:22-68.220.241.50:54568.service - OpenSSH per-connection server daemon (68.220.241.50:54568). Jan 17 00:02:11.667111 sshd[6526]: Accepted publickey for core from 68.220.241.50 port 54568 ssh2: RSA SHA256:sDAGzlGx6Edt6Gaawxj4VUM2Wx61RszNQ552ADQfH20 Jan 17 00:02:11.670045 sshd[6526]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 17 00:02:11.681896 systemd-logind[2113]: New session 24 of user core. Jan 17 00:02:11.690633 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 17 00:02:12.127620 kubelet[3417]: E0117 00:02:12.125111 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:02:12.279797 sshd[6526]: pam_unix(sshd:session): session closed for user core Jan 17 00:02:12.291680 systemd[1]: sshd@23-172.31.23.167:22-68.220.241.50:54568.service: Deactivated successfully. Jan 17 00:02:12.302837 systemd[1]: session-24.scope: Deactivated successfully. Jan 17 00:02:12.304956 systemd-logind[2113]: Session 24 logged out. Waiting for processes to exit. Jan 17 00:02:12.308031 systemd-logind[2113]: Removed session 24. 
Jan 17 00:02:13.132722 kubelet[3417]: E0117 00:02:13.130508 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:02:16.120507 kubelet[3417]: E0117 00:02:16.119110 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:02:17.119098 kubelet[3417]: E0117 00:02:17.118898 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:02:18.118104 kubelet[3417]: E0117 00:02:18.118048 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:02:24.117986 kubelet[3417]: E0117 00:02:24.117834 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:02:25.800692 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496-rootfs.mount: Deactivated successfully. Jan 17 00:02:25.810045 containerd[2150]: time="2026-01-17T00:02:25.809961606Z" level=info msg="shim disconnected" id=4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496 namespace=k8s.io Jan 17 00:02:25.811015 containerd[2150]: time="2026-01-17T00:02:25.810751746Z" level=warning msg="cleaning up after shim disconnected" id=4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496 namespace=k8s.io Jan 17 00:02:25.811015 containerd[2150]: time="2026-01-17T00:02:25.810787866Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:26.138816 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd-rootfs.mount: Deactivated successfully. Jan 17 00:02:26.143922 containerd[2150]: time="2026-01-17T00:02:26.143590227Z" level=info msg="shim disconnected" id=f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd namespace=k8s.io Jan 17 00:02:26.143922 containerd[2150]: time="2026-01-17T00:02:26.143676771Z" level=warning msg="cleaning up after shim disconnected" id=f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd namespace=k8s.io Jan 17 00:02:26.143922 containerd[2150]: time="2026-01-17T00:02:26.143700579Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:26.275191 kubelet[3417]: I0117 00:02:26.274292 3417 scope.go:117] "RemoveContainer" containerID="4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496" Jan 17 00:02:26.279211 containerd[2150]: time="2026-01-17T00:02:26.279151264Z" level=info msg="CreateContainer within sandbox \"9e7078619ae72f64ea683d7d33979867adeb4acd4a534a2e566afea3a7229d29\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 17 00:02:26.282116 kubelet[3417]: I0117 00:02:26.281696 3417 scope.go:117] "RemoveContainer" containerID="f481c7329a34b9f0c212439f9dedd0acee9a902b2d602ab62b1cbd851539bbdd" Jan 17 00:02:26.285475 containerd[2150]: time="2026-01-17T00:02:26.285381136Z" level=info msg="CreateContainer within sandbox \"faea05b96b5961e46554f84b5d818e2140c1e1e8edf82d3490c6a419c6e478ea\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 17 00:02:26.288306 kubelet[3417]: E0117 00:02:26.287911 3417 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": context deadline exceeded" Jan 17 00:02:26.308006 containerd[2150]: time="2026-01-17T00:02:26.307843576Z" level=info msg="CreateContainer within sandbox \"9e7078619ae72f64ea683d7d33979867adeb4acd4a534a2e566afea3a7229d29\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6\"" Jan 17 00:02:26.314472 containerd[2150]: time="2026-01-17T00:02:26.311487856Z" level=info msg="StartContainer for \"0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6\"" Jan 17 00:02:26.313777 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2064637311.mount: Deactivated successfully. 
Jan 17 00:02:26.330217 containerd[2150]: time="2026-01-17T00:02:26.330159532Z" level=info msg="CreateContainer within sandbox \"faea05b96b5961e46554f84b5d818e2140c1e1e8edf82d3490c6a419c6e478ea\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"a42d7f885b70dab72449985fad79db7d8ba24b98837ed96b4831f046e84ed532\"" Jan 17 00:02:26.332929 containerd[2150]: time="2026-01-17T00:02:26.332882608Z" level=info msg="StartContainer for \"a42d7f885b70dab72449985fad79db7d8ba24b98837ed96b4831f046e84ed532\"" Jan 17 00:02:26.487425 containerd[2150]: time="2026-01-17T00:02:26.485410217Z" level=info msg="StartContainer for \"0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6\" returns successfully" Jan 17 00:02:26.503028 containerd[2150]: time="2026-01-17T00:02:26.500552261Z" level=info msg="StartContainer for \"a42d7f885b70dab72449985fad79db7d8ba24b98837ed96b4831f046e84ed532\" returns successfully" Jan 17 00:02:28.117595 kubelet[3417]: E0117 00:02:28.117532 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:02:29.120215 kubelet[3417]: E0117 00:02:29.120120 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:02:30.117987 kubelet[3417]: E0117 00:02:30.117811 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:02:30.117987 kubelet[3417]: E0117 00:02:30.117932 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:02:31.061305 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e-rootfs.mount: Deactivated successfully. Jan 17 00:02:31.074782 containerd[2150]: time="2026-01-17T00:02:31.074695004Z" level=info msg="shim disconnected" id=1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e namespace=k8s.io Jan 17 00:02:31.075604 containerd[2150]: time="2026-01-17T00:02:31.075507440Z" level=warning msg="cleaning up after shim disconnected" id=1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e namespace=k8s.io Jan 17 00:02:31.075604 containerd[2150]: time="2026-01-17T00:02:31.075559388Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:31.306162 kubelet[3417]: I0117 00:02:31.306104 3417 scope.go:117] "RemoveContainer" containerID="1f9391ad6bb9398bef42f5fe0cf5903b3ea3709c2eb2f022de6447338786300e" Jan 17 00:02:31.309246 containerd[2150]: time="2026-01-17T00:02:31.309187677Z" level=info msg="CreateContainer within sandbox \"a5766b3578598370c2dc652f8b30c4b376a96ed9551e33c28c84b96a2517c45e\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 17 00:02:31.336987 containerd[2150]: time="2026-01-17T00:02:31.336587625Z" level=info msg="CreateContainer within sandbox \"a5766b3578598370c2dc652f8b30c4b376a96ed9551e33c28c84b96a2517c45e\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"ff0b95b515b70bd4f94411781fbfb199ae5a41c55d952eecb7ae5b67faa64db6\"" Jan 17 00:02:31.337550 containerd[2150]: time="2026-01-17T00:02:31.337457997Z" level=info msg="StartContainer for \"ff0b95b515b70bd4f94411781fbfb199ae5a41c55d952eecb7ae5b67faa64db6\"" Jan 17 00:02:31.465266 containerd[2150]: time="2026-01-17T00:02:31.465086194Z" level=info msg="StartContainer for \"ff0b95b515b70bd4f94411781fbfb199ae5a41c55d952eecb7ae5b67faa64db6\" returns successfully" Jan 17 00:02:36.290910 kubelet[3417]: E0117 00:02:36.289211 3417 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 17 00:02:37.118750 kubelet[3417]: E0117 00:02:37.118674 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-wxtgw" podUID="78e9de4b-ea97-4b48-8f59-1242c0c3be02" Jan 17 00:02:38.013798 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6-rootfs.mount: Deactivated successfully. 
Jan 17 00:02:38.024990 containerd[2150]: time="2026-01-17T00:02:38.024710582Z" level=info msg="shim disconnected" id=0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6 namespace=k8s.io Jan 17 00:02:38.024990 containerd[2150]: time="2026-01-17T00:02:38.024804710Z" level=warning msg="cleaning up after shim disconnected" id=0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6 namespace=k8s.io Jan 17 00:02:38.024990 containerd[2150]: time="2026-01-17T00:02:38.024825554Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 17 00:02:38.332505 kubelet[3417]: I0117 00:02:38.331624 3417 scope.go:117] "RemoveContainer" containerID="4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496" Jan 17 00:02:38.332505 kubelet[3417]: I0117 00:02:38.332089 3417 scope.go:117] "RemoveContainer" containerID="0d7b82d184ed82d3b29ce43ecab3a9c296ef74008778037c0e9d44b17a908ee6" Jan 17 00:02:38.332505 kubelet[3417]: E0117 00:02:38.332311 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-mb6cm_tigera-operator(11c64c0f-b3de-476c-b88d-e4b12618deab)\"" pod="tigera-operator/tigera-operator-7dcd859c48-mb6cm" podUID="11c64c0f-b3de-476c-b88d-e4b12618deab" Jan 17 00:02:38.335046 containerd[2150]: time="2026-01-17T00:02:38.334988848Z" level=info msg="RemoveContainer for \"4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496\"" Jan 17 00:02:38.341832 containerd[2150]: time="2026-01-17T00:02:38.341752936Z" level=info msg="RemoveContainer for \"4515b18dbd1736db010806a39eeca8c8a2d7ed37a6784a6da203c99a0fd25496\" returns successfully" Jan 17 00:02:42.117547 kubelet[3417]: E0117 00:02:42.117408 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-vmx9m" podUID="e6a5f346-af6c-40f3-8c32-a682e7923b77" Jan 17 00:02:42.119514 containerd[2150]: time="2026-01-17T00:02:42.119044327Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 17 00:02:42.384061 containerd[2150]: time="2026-01-17T00:02:42.383884376Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:42.386201 containerd[2150]: time="2026-01-17T00:02:42.386127200Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 17 00:02:42.386344 containerd[2150]: time="2026-01-17T00:02:42.386281844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 17 00:02:42.386961 kubelet[3417]: E0117 00:02:42.386602 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:42.386961 kubelet[3417]: E0117 00:02:42.386668 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 17 00:02:42.386961 kubelet[3417]: E0117 00:02:42.386861 3417 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-2snwq,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-74f49dc95d-4gk47_calico-system(26583054-1df4-4aad-bd58-41f9694f0072): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:42.388152 kubelet[3417]: E0117 00:02:42.388075 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-74f49dc95d-4gk47" podUID="26583054-1df4-4aad-bd58-41f9694f0072" Jan 17 00:02:44.118766 kubelet[3417]: E0117 00:02:44.118672 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-jjr5r" podUID="7326267c-1eb2-4759-b98f-e8dc2742ecd4" Jan 17 00:02:45.119060 containerd[2150]: time="2026-01-17T00:02:45.118681018Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 17 00:02:45.389829 containerd[2150]: time="2026-01-17T00:02:45.389655539Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 17 00:02:45.391931 containerd[2150]: time="2026-01-17T00:02:45.391772447Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 17 00:02:45.391931 containerd[2150]: time="2026-01-17T00:02:45.391851311Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 17 00:02:45.392129 kubelet[3417]: E0117 00:02:45.392074 3417 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:45.392789 kubelet[3417]: E0117 00:02:45.392138 3417 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 17 00:02:45.392789 kubelet[3417]: E0117 00:02:45.392331 3417 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-xkkn2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5fccc8c4dd-j7cxw_calico-apiserver(5d919700-9b50-4829-84da-97568c603805): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 17 00:02:45.393619 kubelet[3417]: E0117 00:02:45.393556 3417 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5fccc8c4dd-j7cxw" podUID="5d919700-9b50-4829-84da-97568c603805" Jan 17 00:02:46.289599 kubelet[3417]: E0117 00:02:46.289494 3417 controller.go:195] "Failed to update lease" err="Put \"https://172.31.23.167:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-23-167?timeout=10s\": context deadline exceeded"