Jan 23 23:54:02.259042 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 23:54:02.259090 kernel: Linux version 6.6.119-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 13.3.1_p20240614 p17) 13.3.1 20240614, GNU ld (Gentoo 2.42 p3) 2.42.0) #1 SMP PREEMPT Fri Jan 23 22:26:47 -00 2026
Jan 23 23:54:02.259115 kernel: KASLR disabled due to lack of seed
Jan 23 23:54:02.259132 kernel: efi: EFI v2.7 by EDK II
Jan 23 23:54:02.259149 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7b001a98 MEMRESERVE=0x7852ee18
Jan 23 23:54:02.259164 kernel: ACPI: Early table checksum verification disabled
Jan 23 23:54:02.259182 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 23:54:02.259198 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 23:54:02.259214 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 23:54:02.259230 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 23:54:02.259251 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 23:54:02.259268 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 23:54:02.259283 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 23:54:02.259299 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 23:54:02.259319 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 23:54:02.259340 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 23:54:02.259357 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 23:54:02.259374 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 23:54:02.259392 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 23:54:02.259409 kernel: printk: bootconsole [uart0] enabled
Jan 23 23:54:02.259426 kernel: NUMA: Failed to initialise from firmware
Jan 23 23:54:02.259443 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:54:02.259460 kernel: NUMA: NODE_DATA [mem 0x4b583f800-0x4b5844fff]
Jan 23 23:54:02.259477 kernel: Zone ranges:
Jan 23 23:54:02.259494 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 23:54:02.259510 kernel: DMA32 empty
Jan 23 23:54:02.259531 kernel: Normal [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 23:54:02.259548 kernel: Movable zone start for each node
Jan 23 23:54:02.259564 kernel: Early memory node ranges
Jan 23 23:54:02.259580 kernel: node 0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 23:54:02.259597 kernel: node 0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 23:54:02.260879 kernel: node 0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 23:54:02.260911 kernel: node 0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 23:54:02.260929 kernel: node 0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 23:54:02.260948 kernel: node 0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 23:54:02.260966 kernel: node 0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 23:54:02.260983 kernel: node 0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 23:54:02.261000 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 23:54:02.261029 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 23:54:02.261047 kernel: psci: probing for conduit method from ACPI.
Jan 23 23:54:02.261073 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 23:54:02.261091 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 23:54:02.261109 kernel: psci: Trusted OS migration not required
Jan 23 23:54:02.261132 kernel: psci: SMC Calling Convention v1.1
Jan 23 23:54:02.261150 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 23:54:02.261168 kernel: percpu: Embedded 30 pages/cpu s85672 r8192 d29016 u122880
Jan 23 23:54:02.261186 kernel: pcpu-alloc: s85672 r8192 d29016 u122880 alloc=30*4096
Jan 23 23:54:02.261204 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 23:54:02.261222 kernel: Detected PIPT I-cache on CPU0
Jan 23 23:54:02.261240 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 23:54:02.261258 kernel: CPU features: detected: Spectre-v2
Jan 23 23:54:02.261275 kernel: CPU features: detected: Spectre-v3a
Jan 23 23:54:02.261293 kernel: CPU features: detected: Spectre-BHB
Jan 23 23:54:02.261311 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 23:54:02.261334 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 23:54:02.261353 kernel: alternatives: applying boot alternatives
Jan 23 23:54:02.261373 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:54:02.261392 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 23:54:02.261410 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 23:54:02.261427 kernel: Fallback order for Node 0: 0
Jan 23 23:54:02.261445 kernel: Built 1 zonelists, mobility grouping on. Total pages: 991872
Jan 23 23:54:02.261463 kernel: Policy zone: Normal
Jan 23 23:54:02.261480 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 23:54:02.261497 kernel: software IO TLB: area num 2.
Jan 23 23:54:02.261515 kernel: software IO TLB: mapped [mem 0x000000007c000000-0x0000000080000000] (64MB)
Jan 23 23:54:02.261542 kernel: Memory: 3820096K/4030464K available (10304K kernel code, 2180K rwdata, 8112K rodata, 39424K init, 897K bss, 210368K reserved, 0K cma-reserved)
Jan 23 23:54:02.261561 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 23:54:02.261579 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 23:54:02.261598 kernel: rcu: RCU event tracing is enabled.
Jan 23 23:54:02.263708 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 23:54:02.263741 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 23:54:02.263760 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 23:54:02.263779 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 23:54:02.263797 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 23:54:02.263815 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 23:54:02.263832 kernel: GICv3: 96 SPIs implemented
Jan 23 23:54:02.263863 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 23:54:02.263882 kernel: Root IRQ handler: gic_handle_irq
Jan 23 23:54:02.263899 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 23:54:02.263918 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 23:54:02.263936 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 23:54:02.263954 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000b0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 23:54:02.263974 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @4000c0000 (flat, esz 8, psz 64K, shr 1)
Jan 23 23:54:02.263992 kernel: GICv3: using LPI property table @0x00000004000d0000
Jan 23 23:54:02.264011 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 23:54:02.264028 kernel: GICv3: CPU0: using allocated LPI pending table @0x00000004000e0000
Jan 23 23:54:02.264046 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 23:54:02.264063 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 23:54:02.264088 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 23:54:02.264106 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 23:54:02.264124 kernel: Console: colour dummy device 80x25
Jan 23 23:54:02.264142 kernel: printk: console [tty1] enabled
Jan 23 23:54:02.264160 kernel: ACPI: Core revision 20230628
Jan 23 23:54:02.264179 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 23:54:02.264198 kernel: pid_max: default: 32768 minimum: 301
Jan 23 23:54:02.264218 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,integrity
Jan 23 23:54:02.264236 kernel: landlock: Up and running.
Jan 23 23:54:02.264261 kernel: SELinux: Initializing.
Jan 23 23:54:02.264280 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:54:02.264298 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 23:54:02.264316 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:54:02.264334 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 23:54:02.264353 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 23:54:02.264371 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 23:54:02.264389 kernel: Platform MSI: ITS@0x10080000 domain created
Jan 23 23:54:02.264407 kernel: PCI/MSI: ITS@0x10080000 domain created
Jan 23 23:54:02.264430 kernel: Remapping and enabling EFI services.
Jan 23 23:54:02.264449 kernel: smp: Bringing up secondary CPUs ...
Jan 23 23:54:02.264467 kernel: Detected PIPT I-cache on CPU1
Jan 23 23:54:02.264485 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 23:54:02.264503 kernel: GICv3: CPU1: using allocated LPI pending table @0x00000004000f0000
Jan 23 23:54:02.264521 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 23:54:02.264539 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 23:54:02.264557 kernel: SMP: Total of 2 processors activated.
Jan 23 23:54:02.264575 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 23:54:02.264598 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 23:54:02.264670 kernel: CPU features: detected: CRC32 instructions
Jan 23 23:54:02.264694 kernel: CPU: All CPU(s) started at EL1
Jan 23 23:54:02.264731 kernel: alternatives: applying system-wide alternatives
Jan 23 23:54:02.264755 kernel: devtmpfs: initialized
Jan 23 23:54:02.264774 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 23:54:02.264793 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 23:54:02.264812 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 23:54:02.264831 kernel: SMBIOS 3.0.0 present.
Jan 23 23:54:02.264855 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 23:54:02.264874 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 23:54:02.264893 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 23:54:02.264913 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 23:54:02.264932 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 23:54:02.264951 kernel: audit: initializing netlink subsys (disabled)
Jan 23 23:54:02.264970 kernel: audit: type=2000 audit(0.297:1): state=initialized audit_enabled=0 res=1
Jan 23 23:54:02.264988 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 23:54:02.265012 kernel: cpuidle: using governor menu
Jan 23 23:54:02.265032 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 23:54:02.265050 kernel: ASID allocator initialised with 65536 entries
Jan 23 23:54:02.265070 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 23:54:02.265090 kernel: Serial: AMBA PL011 UART driver
Jan 23 23:54:02.265109 kernel: Modules: 17488 pages in range for non-PLT usage
Jan 23 23:54:02.265131 kernel: Modules: 509008 pages in range for PLT usage
Jan 23 23:54:02.265150 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 23:54:02.265169 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 23:54:02.265193 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 23:54:02.265213 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 23:54:02.265232 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 23:54:02.265250 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 23:54:02.265269 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 23:54:02.265288 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 23:54:02.265306 kernel: ACPI: Added _OSI(Module Device)
Jan 23 23:54:02.265325 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 23:54:02.265343 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 23:54:02.265367 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 23:54:02.265387 kernel: ACPI: Interpreter enabled
Jan 23 23:54:02.265405 kernel: ACPI: Using GIC for interrupt routing
Jan 23 23:54:02.265424 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 23:54:02.265443 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 23:54:02.265865 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 23:54:02.266122 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 23:54:02.266372 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 23:54:02.268789 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 23:54:02.269139 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 23:54:02.269178 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 23:54:02.269200 kernel: acpiphp: Slot [1] registered
Jan 23 23:54:02.269222 kernel: acpiphp: Slot [2] registered
Jan 23 23:54:02.269242 kernel: acpiphp: Slot [3] registered
Jan 23 23:54:02.269262 kernel: acpiphp: Slot [4] registered
Jan 23 23:54:02.269281 kernel: acpiphp: Slot [5] registered
Jan 23 23:54:02.269315 kernel: acpiphp: Slot [6] registered
Jan 23 23:54:02.269335 kernel: acpiphp: Slot [7] registered
Jan 23 23:54:02.269353 kernel: acpiphp: Slot [8] registered
Jan 23 23:54:02.269372 kernel: acpiphp: Slot [9] registered
Jan 23 23:54:02.269392 kernel: acpiphp: Slot [10] registered
Jan 23 23:54:02.269411 kernel: acpiphp: Slot [11] registered
Jan 23 23:54:02.269431 kernel: acpiphp: Slot [12] registered
Jan 23 23:54:02.269450 kernel: acpiphp: Slot [13] registered
Jan 23 23:54:02.269471 kernel: acpiphp: Slot [14] registered
Jan 23 23:54:02.269490 kernel: acpiphp: Slot [15] registered
Jan 23 23:54:02.269517 kernel: acpiphp: Slot [16] registered
Jan 23 23:54:02.269537 kernel: acpiphp: Slot [17] registered
Jan 23 23:54:02.269558 kernel: acpiphp: Slot [18] registered
Jan 23 23:54:02.269577 kernel: acpiphp: Slot [19] registered
Jan 23 23:54:02.269598 kernel: acpiphp: Slot [20] registered
Jan 23 23:54:02.269676 kernel: acpiphp: Slot [21] registered
Jan 23 23:54:02.269701 kernel: acpiphp: Slot [22] registered
Jan 23 23:54:02.269720 kernel: acpiphp: Slot [23] registered
Jan 23 23:54:02.269742 kernel: acpiphp: Slot [24] registered
Jan 23 23:54:02.269771 kernel: acpiphp: Slot [25] registered
Jan 23 23:54:02.269791 kernel: acpiphp: Slot [26] registered
Jan 23 23:54:02.269809 kernel: acpiphp: Slot [27] registered
Jan 23 23:54:02.269828 kernel: acpiphp: Slot [28] registered
Jan 23 23:54:02.269847 kernel: acpiphp: Slot [29] registered
Jan 23 23:54:02.269865 kernel: acpiphp: Slot [30] registered
Jan 23 23:54:02.269884 kernel: acpiphp: Slot [31] registered
Jan 23 23:54:02.269902 kernel: PCI host bridge to bus 0000:00
Jan 23 23:54:02.270209 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 23:54:02.270470 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 23:54:02.272837 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:54:02.273063 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 23:54:02.273320 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000
Jan 23 23:54:02.273591 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003
Jan 23 23:54:02.273883 kernel: pci 0000:00:01.0: reg 0x10: [mem 0x80118000-0x80118fff]
Jan 23 23:54:02.274133 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802
Jan 23 23:54:02.274367 kernel: pci 0000:00:04.0: reg 0x10: [mem 0x80114000-0x80117fff]
Jan 23 23:54:02.276690 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:54:02.277044 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000
Jan 23 23:54:02.277299 kernel: pci 0000:00:05.0: reg 0x10: [mem 0x80110000-0x80113fff]
Jan 23 23:54:02.277520 kernel: pci 0000:00:05.0: reg 0x18: [mem 0x80000000-0x800fffff pref]
Jan 23 23:54:02.280870 kernel: pci 0000:00:05.0: reg 0x20: [mem 0x80100000-0x8010ffff]
Jan 23 23:54:02.281124 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 23:54:02.281319 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 23:54:02.281503 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 23:54:02.282822 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 23:54:02.282869 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 23:54:02.282890 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 23:54:02.282909 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 23:54:02.282929 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 23:54:02.282959 kernel: iommu: Default domain type: Translated
Jan 23 23:54:02.282978 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 23:54:02.282996 kernel: efivars: Registered efivars operations
Jan 23 23:54:02.283015 kernel: vgaarb: loaded
Jan 23 23:54:02.283034 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 23:54:02.283053 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 23:54:02.283071 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 23:54:02.283090 kernel: pnp: PnP ACPI init
Jan 23 23:54:02.283319 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 23:54:02.283356 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 23:54:02.283376 kernel: NET: Registered PF_INET protocol family
Jan 23 23:54:02.283395 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 23:54:02.283415 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 23:54:02.283435 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 23:54:02.283455 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 23:54:02.283474 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 23:54:02.283494 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 23:54:02.283518 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:54:02.283538 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 23:54:02.283556 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 23:54:02.283575 kernel: PCI: CLS 0 bytes, default 64
Jan 23 23:54:02.283593 kernel: kvm [1]: HYP mode not available
Jan 23 23:54:02.283663 kernel: Initialise system trusted keyrings
Jan 23 23:54:02.283686 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 23:54:02.283705 kernel: Key type asymmetric registered
Jan 23 23:54:02.283725 kernel: Asymmetric key parser 'x509' registered
Jan 23 23:54:02.283754 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 250)
Jan 23 23:54:02.283775 kernel: io scheduler mq-deadline registered
Jan 23 23:54:02.283793 kernel: io scheduler kyber registered
Jan 23 23:54:02.283812 kernel: io scheduler bfq registered
Jan 23 23:54:02.284062 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 23:54:02.284092 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 23:54:02.284112 kernel: ACPI: button: Power Button [PWRB]
Jan 23 23:54:02.284131 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 23:54:02.284149 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 23:54:02.284176 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 23:54:02.284196 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 23:54:02.284411 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 23:54:02.284439 kernel: printk: console [ttyS0] disabled
Jan 23 23:54:02.284458 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 23:54:02.284477 kernel: printk: console [ttyS0] enabled
Jan 23 23:54:02.284496 kernel: printk: bootconsole [uart0] disabled
Jan 23 23:54:02.284514 kernel: thunder_xcv, ver 1.0
Jan 23 23:54:02.284533 kernel: thunder_bgx, ver 1.0
Jan 23 23:54:02.284557 kernel: nicpf, ver 1.0
Jan 23 23:54:02.284575 kernel: nicvf, ver 1.0
Jan 23 23:54:02.288907 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 23:54:02.289130 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T23:54:01 UTC (1769212441)
Jan 23 23:54:02.289157 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 23:54:02.289177 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 counters available
Jan 23 23:54:02.289197 kernel: watchdog: Delayed init of the lockup detector failed: -19
Jan 23 23:54:02.289215 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 23:54:02.289245 kernel: NET: Registered PF_INET6 protocol family
Jan 23 23:54:02.289265 kernel: Segment Routing with IPv6
Jan 23 23:54:02.289283 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 23:54:02.289302 kernel: NET: Registered PF_PACKET protocol family
Jan 23 23:54:02.289320 kernel: Key type dns_resolver registered
Jan 23 23:54:02.289339 kernel: registered taskstats version 1
Jan 23 23:54:02.289357 kernel: Loading compiled-in X.509 certificates
Jan 23 23:54:02.289376 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.6.119-flatcar: e1080b1efd8e2d5332b6814128fba42796535445'
Jan 23 23:54:02.289395 kernel: Key type .fscrypt registered
Jan 23 23:54:02.289419 kernel: Key type fscrypt-provisioning registered
Jan 23 23:54:02.289437 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 23:54:02.289456 kernel: ima: Allocated hash algorithm: sha1
Jan 23 23:54:02.289475 kernel: ima: No architecture policies found
Jan 23 23:54:02.289493 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 23:54:02.289513 kernel: clk: Disabling unused clocks
Jan 23 23:54:02.289533 kernel: Freeing unused kernel memory: 39424K
Jan 23 23:54:02.289553 kernel: Run /init as init process
Jan 23 23:54:02.289572 kernel: with arguments:
Jan 23 23:54:02.289599 kernel: /init
Jan 23 23:54:02.289682 kernel: with environment:
Jan 23 23:54:02.289703 kernel: HOME=/
Jan 23 23:54:02.289722 kernel: TERM=linux
Jan 23 23:54:02.289748 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified)
Jan 23 23:54:02.289772 systemd[1]: Detected virtualization amazon.
Jan 23 23:54:02.289794 systemd[1]: Detected architecture arm64.
Jan 23 23:54:02.289814 systemd[1]: Running in initrd.
Jan 23 23:54:02.289847 systemd[1]: No hostname configured, using default hostname.
Jan 23 23:54:02.289868 systemd[1]: Hostname set to <localhost>.
Jan 23 23:54:02.289889 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 23:54:02.289910 systemd[1]: Queued start job for default target initrd.target.
Jan 23 23:54:02.289930 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 23:54:02.289952 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 23:54:02.289974 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 23:54:02.289995 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 23:54:02.290022 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 23:54:02.290044 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 23:54:02.290068 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 23:54:02.290089 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 23:54:02.290110 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 23:54:02.290130 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 23:54:02.290155 systemd[1]: Reached target paths.target - Path Units.
Jan 23 23:54:02.290176 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 23:54:02.290196 systemd[1]: Reached target swap.target - Swaps.
Jan 23 23:54:02.290216 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 23:54:02.290236 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 23:54:02.290256 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 23:54:02.290276 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 23:54:02.290296 systemd[1]: Listening on systemd-journald.socket - Journal Socket.
Jan 23 23:54:02.290316 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 23:54:02.290341 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 23:54:02.290386 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 23:54:02.290409 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 23:54:02.290430 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 23:54:02.290450 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 23:54:02.290470 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 23:54:02.290490 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 23:54:02.290510 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 23:54:02.290531 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 23:54:02.290558 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:02.290578 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 23:54:02.290598 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 23:54:02.290911 systemd-journald[251]: Collecting audit messages is disabled.
Jan 23 23:54:02.290967 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 23:54:02.290989 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 23:54:02.291010 systemd-journald[251]: Journal started
Jan 23 23:54:02.291052 systemd-journald[251]: Runtime Journal (/run/log/journal/ec2cd8686b8918ba2058b5f0d840e52d) is 8.0M, max 75.3M, 67.3M free.
Jan 23 23:54:02.297765 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 23:54:02.263186 systemd-modules-load[252]: Inserted module 'overlay'
Jan 23 23:54:02.305301 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:02.306433 kernel: Bridge firewalling registered
Jan 23 23:54:02.304056 systemd-modules-load[252]: Inserted module 'br_netfilter'
Jan 23 23:54:02.317979 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 23:54:02.319039 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 23:54:02.322260 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 23:54:02.341042 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:02.353080 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 23:54:02.359920 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 23:54:02.371916 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 23:54:02.413223 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 23:54:02.419012 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 23:54:02.423804 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:02.424805 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 23:54:02.443925 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 23:54:02.454928 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 23:54:02.476685 dracut-cmdline[288]: dracut-dracut-053
Jan 23 23:54:02.484292 dracut-cmdline[288]: Using kernel command line parameters: rd.driver.pre=btrfs BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=a01f25d0714a86cf8b897276230b4ac71c04b1d69bd03a1f6d2ef96f59ef0f09
Jan 23 23:54:02.559497 systemd-resolved[289]: Positive Trust Anchors:
Jan 23 23:54:02.559541 systemd-resolved[289]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 23:54:02.559640 systemd-resolved[289]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 23:54:02.636655 kernel: SCSI subsystem initialized
Jan 23 23:54:02.643647 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 23:54:02.657664 kernel: iscsi: registered transport (tcp)
Jan 23 23:54:02.680893 kernel: iscsi: registered transport (qla4xxx)
Jan 23 23:54:02.680968 kernel: QLogic iSCSI HBA Driver
Jan 23 23:54:02.785672 kernel: random: crng init done
Jan 23 23:54:02.786317 systemd-resolved[289]: Defaulting to hostname 'linux'.
Jan 23 23:54:02.793314 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 23:54:02.799267 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:02.824478 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 23:54:02.838022 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 23:54:02.875663 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 23:54:02.875739 kernel: device-mapper: uevent: version 1.0.3
Jan 23 23:54:02.878634 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@redhat.com
Jan 23 23:54:02.945679 kernel: raid6: neonx8 gen() 6678 MB/s
Jan 23 23:54:02.962670 kernel: raid6: neonx4 gen() 6495 MB/s
Jan 23 23:54:02.979673 kernel: raid6: neonx2 gen() 5415 MB/s
Jan 23 23:54:02.996671 kernel: raid6: neonx1 gen() 3922 MB/s
Jan 23 23:54:03.013677 kernel: raid6: int64x8 gen() 3798 MB/s
Jan 23 23:54:03.030664 kernel: raid6: int64x4 gen() 3621 MB/s
Jan 23 23:54:03.047666 kernel: raid6: int64x2 gen() 3561 MB/s
Jan 23 23:54:03.065807 kernel: raid6: int64x1 gen() 2712 MB/s
Jan 23 23:54:03.065901 kernel: raid6: using algorithm neonx8 gen() 6678 MB/s
Jan 23 23:54:03.085169 kernel: raid6: .... xor() 4831 MB/s, rmw enabled
Jan 23 23:54:03.085258 kernel: raid6: using neon recovery algorithm
Jan 23 23:54:03.095565 kernel: xor: measuring software checksum speed
Jan 23 23:54:03.095672 kernel: 8regs : 10381 MB/sec
Jan 23 23:54:03.098375 kernel: 32regs : 11150 MB/sec
Jan 23 23:54:03.098435 kernel: arm64_neon : 9323 MB/sec
Jan 23 23:54:03.098463 kernel: xor: using function: 32regs (11150 MB/sec)
Jan 23 23:54:03.186674 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 23:54:03.209177 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 23:54:03.223965 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 23:54:03.272948 systemd-udevd[471]: Using default interface naming scheme 'v255'.
Jan 23 23:54:03.281396 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 23:54:03.299911 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 23:54:03.331791 dracut-pre-trigger[476]: rd.md=0: removing MD RAID activation
Jan 23 23:54:03.390418 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 23:54:03.399113 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 23:54:03.523218 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 23:54:03.540281 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 23:54:03.584540 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 23:54:03.589859 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:54:03.592749 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:03.595701 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 23:54:03.614051 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 23:54:03.653682 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 23:54:03.748447 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 23:54:03.748525 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 23:54:03.755118 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 23:54:03.758274 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 23:54:03.755010 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 23:54:03.755145 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:03.758402 systemd[1]: Stopping dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:03.761054 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 23:54:03.802602 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:25:71:56:d0:1b
Jan 23 23:54:03.763349 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:03.772413 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:03.787878 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 23:54:03.798959 (udev-worker)[521]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 23:54:03.833658 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 23:54:03.833729 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 23:54:03.847586 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 23:54:03.844785 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 23:54:03.858061 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 23:54:03.871442 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 23:54:03.871513 kernel: GPT:9289727 != 33554431
Jan 23 23:54:03.871539 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 23:54:03.872407 kernel: GPT:9289727 != 33554431
Jan 23 23:54:03.873656 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 23:54:03.874648 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:03.895549 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 23:54:03.991670 kernel: BTRFS: device label OEM devid 1 transid 9 /dev/nvme0n1p6 scanned by (udev-worker) (530)
Jan 23 23:54:04.009660 kernel: BTRFS: device fsid 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe devid 1 transid 34 /dev/nvme0n1p3 scanned by (udev-worker) (545)
Jan 23 23:54:04.097028 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 23:54:04.142813 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 23:54:04.162310 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 23:54:04.177170 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 23:54:04.184876 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 23:54:04.203990 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 23:54:04.219217 disk-uuid[663]: Primary Header is updated.
Jan 23 23:54:04.219217 disk-uuid[663]: Secondary Entries is updated.
Jan 23 23:54:04.219217 disk-uuid[663]: Secondary Header is updated.
Jan 23 23:54:04.240672 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:04.252659 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:04.261684 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:05.264734 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 23:54:05.265684 disk-uuid[664]: The operation has completed successfully.
Jan 23 23:54:05.444798 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 23:54:05.447389 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 23:54:05.495868 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 23:54:05.506834 sh[1008]: Success
Jan 23 23:54:05.531788 kernel: device-mapper: verity: sha256 using implementation "sha256-ce"
Jan 23 23:54:05.639824 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 23:54:05.652858 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 23:54:05.662726 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 23:54:05.702419 kernel: BTRFS info (device dm-0): first mount of filesystem 6d31cc5b-4da2-4320-9991-d4bd2fc0f7fe
Jan 23 23:54:05.702518 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:05.702549 kernel: BTRFS warning (device dm-0): 'nologreplay' is deprecated, use 'rescue=nologreplay' instead
Jan 23 23:54:05.704530 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 23:54:05.706053 kernel: BTRFS info (device dm-0): using free space tree
Jan 23 23:54:05.826658 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 23:54:05.840425 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 23:54:05.845035 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 23:54:05.855041 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 23:54:05.861932 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 23:54:05.897652 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:05.897729 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:05.899584 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:05.922791 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:05.937459 systemd[1]: mnt-oem.mount: Deactivated successfully.
Jan 23 23:54:05.941697 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:05.952840 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 23:54:05.965096 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 23:54:06.075854 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 23:54:06.089025 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 23:54:06.146133 systemd-networkd[1201]: lo: Link UP
Jan 23 23:54:06.146153 systemd-networkd[1201]: lo: Gained carrier
Jan 23 23:54:06.151493 systemd-networkd[1201]: Enumeration completed
Jan 23 23:54:06.153497 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:06.153505 systemd-networkd[1201]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 23:54:06.155337 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 23:54:06.162528 systemd[1]: Reached target network.target - Network.
Jan 23 23:54:06.173480 systemd-networkd[1201]: eth0: Link UP
Jan 23 23:54:06.173488 systemd-networkd[1201]: eth0: Gained carrier
Jan 23 23:54:06.173507 systemd-networkd[1201]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 23:54:06.193757 systemd-networkd[1201]: eth0: DHCPv4 address 172.31.18.95/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 23:54:06.454095 ignition[1123]: Ignition 2.19.0
Jan 23 23:54:06.454126 ignition[1123]: Stage: fetch-offline
Jan 23 23:54:06.458653 ignition[1123]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:06.458696 ignition[1123]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:06.464041 ignition[1123]: Ignition finished successfully
Jan 23 23:54:06.466081 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 23:54:06.484928 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 23:54:06.513986 ignition[1211]: Ignition 2.19.0
Jan 23 23:54:06.514022 ignition[1211]: Stage: fetch
Jan 23 23:54:06.517820 ignition[1211]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:06.517865 ignition[1211]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:06.518522 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:06.545504 ignition[1211]: PUT result: OK
Jan 23 23:54:06.549546 ignition[1211]: parsed url from cmdline: ""
Jan 23 23:54:06.549709 ignition[1211]: no config URL provided
Jan 23 23:54:06.549726 ignition[1211]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 23:54:06.551518 ignition[1211]: no config at "/usr/lib/ignition/user.ign"
Jan 23 23:54:06.551586 ignition[1211]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:06.555529 ignition[1211]: PUT result: OK
Jan 23 23:54:06.555643 ignition[1211]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 23:54:06.559553 ignition[1211]: GET result: OK
Jan 23 23:54:06.561130 ignition[1211]: parsing config with SHA512: 78a5457371068895f8621ab151cc559490687072e409bc4813f6b9e09e264b6bc08cb50cb78d39f305ce0e33ab01bdbce8d91932f4530b751d88da70ad73a7bd
Jan 23 23:54:06.573143 unknown[1211]: fetched base config from "system"
Jan 23 23:54:06.575187 unknown[1211]: fetched base config from "system"
Jan 23 23:54:06.575304 unknown[1211]: fetched user config from "aws"
Jan 23 23:54:06.576562 ignition[1211]: fetch: fetch complete
Jan 23 23:54:06.576575 ignition[1211]: fetch: fetch passed
Jan 23 23:54:06.576731 ignition[1211]: Ignition finished successfully
Jan 23 23:54:06.587947 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 23:54:06.599942 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 23:54:06.643430 ignition[1218]: Ignition 2.19.0
Jan 23 23:54:06.643464 ignition[1218]: Stage: kargs
Jan 23 23:54:06.644832 ignition[1218]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:06.644864 ignition[1218]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:06.645039 ignition[1218]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:06.647316 ignition[1218]: PUT result: OK
Jan 23 23:54:06.659367 ignition[1218]: kargs: kargs passed
Jan 23 23:54:06.659496 ignition[1218]: Ignition finished successfully
Jan 23 23:54:06.665373 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 23:54:06.674916 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 23:54:06.703092 ignition[1224]: Ignition 2.19.0
Jan 23 23:54:06.703126 ignition[1224]: Stage: disks
Jan 23 23:54:06.712960 ignition[1224]: no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:06.713007 ignition[1224]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:06.717600 ignition[1224]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:06.721089 ignition[1224]: PUT result: OK
Jan 23 23:54:06.726604 ignition[1224]: disks: disks passed
Jan 23 23:54:06.726751 ignition[1224]: Ignition finished successfully
Jan 23 23:54:06.731690 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 23:54:06.737645 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:06.740497 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 23:54:06.748890 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 23:54:06.751421 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 23:54:06.760898 systemd[1]: Reached target basic.target - Basic System.
Jan 23 23:54:06.774053 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 23:54:06.822880 systemd-fsck[1232]: ROOT: clean, 14/553520 files, 52654/553472 blocks
Jan 23 23:54:06.827755 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 23:54:06.840845 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 23:54:06.924756 kernel: EXT4-fs (nvme0n1p9): mounted filesystem 4f5f6971-6639-4171-835a-63d34aadb0e5 r/w with ordered data mode. Quota mode: none.
Jan 23 23:54:06.925916 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 23:54:06.932513 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 23:54:06.947869 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:06.954938 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 23:54:06.961511 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 23:54:06.961833 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 23:54:06.961886 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:06.992672 kernel: BTRFS: device label OEM devid 1 transid 10 /dev/nvme0n1p6 scanned by mount (1251)
Jan 23 23:54:06.995755 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 23:54:07.003939 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:07.003999 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:07.004027 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:07.010950 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:07.013877 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 23:54:07.019238 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:07.304141 initrd-setup-root[1275]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 23:54:07.324060 initrd-setup-root[1282]: cut: /sysroot/etc/group: No such file or directory
Jan 23 23:54:07.333915 initrd-setup-root[1289]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 23:54:07.343473 initrd-setup-root[1296]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 23:54:07.681326 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 23:54:07.693014 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 23:54:07.700353 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 23:54:07.723025 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 23:54:07.727632 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:07.768693 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 23:54:07.771659 ignition[1363]: INFO : Ignition 2.19.0
Jan 23 23:54:07.771659 ignition[1363]: INFO : Stage: mount
Jan 23 23:54:07.771659 ignition[1363]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:07.771659 ignition[1363]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:07.776920 ignition[1363]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:07.788538 ignition[1363]: INFO : PUT result: OK
Jan 23 23:54:07.794095 ignition[1363]: INFO : mount: mount passed
Jan 23 23:54:07.794095 ignition[1363]: INFO : Ignition finished successfully
Jan 23 23:54:07.798933 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 23:54:07.817817 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 23:54:07.937939 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 23:54:07.963680 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 scanned by mount (1376)
Jan 23 23:54:07.963752 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem 821118c6-9a33-48e6-be55-657b51b768c7
Jan 23 23:54:07.966319 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 23:54:07.966384 kernel: BTRFS info (device nvme0n1p6): using free space tree
Jan 23 23:54:07.973676 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 23:54:07.976476 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 23:54:08.008747 systemd-networkd[1201]: eth0: Gained IPv6LL
Jan 23 23:54:08.020366 ignition[1392]: INFO : Ignition 2.19.0
Jan 23 23:54:08.020366 ignition[1392]: INFO : Stage: files
Jan 23 23:54:08.024323 ignition[1392]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 23:54:08.024323 ignition[1392]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 23:54:08.024323 ignition[1392]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 23:54:08.032991 ignition[1392]: INFO : PUT result: OK
Jan 23 23:54:08.038695 ignition[1392]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 23:54:08.042295 ignition[1392]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 23:54:08.042295 ignition[1392]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 23:54:08.100377 ignition[1392]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 23:54:08.103930 ignition[1392]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 23:54:08.111195 unknown[1392]: wrote ssh authorized keys file for user: core
Jan 23 23:54:08.114178 ignition[1392]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 23:54:08.118388 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:08.118388 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 23:54:08.250277 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 23:54:08.394753 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:08.419809 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 23:54:08.912162 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 23:54:09.301592 ignition[1392]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 23:54:09.301592 ignition[1392]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 23:54:09.309939 ignition[1392]: INFO : files: files passed
Jan 23 23:54:09.309939 ignition[1392]: INFO : Ignition finished successfully
Jan 23 23:54:09.341044 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 23:54:09.355138 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 23:54:09.365683 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 23:54:09.372247 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 23:54:09.372526 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 23:54:09.412975 initrd-setup-root-after-ignition[1425]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:09.418559 initrd-setup-root-after-ignition[1421]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:09.418559 initrd-setup-root-after-ignition[1421]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 23:54:09.428512 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 23:54:09.433379 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 23:54:09.446896 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 23:54:09.506151 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 23:54:09.507922 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 23:54:09.514519 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 23:54:09.516911 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 23:54:09.519857 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 23:54:09.522126 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 23:54:09.573877 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:54:09.586912 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 23:54:09.615857 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 23:54:09.621697 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 23:54:09.629003 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 23:54:09.633073 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 23:54:09.633378 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 23:54:09.638716 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 23:54:09.641581 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 23:54:09.651118 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 23:54:09.651857 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 23:54:09.662010 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 23:54:09.664964 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 23:54:09.667587 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 23:54:09.670798 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 23:54:09.679106 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 23 23:54:09.682518 systemd[1]: Stopped target swap.target - Swaps. Jan 23 23:54:09.687504 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 23 23:54:09.687903 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 23 23:54:09.701729 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:54:09.707192 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:54:09.710360 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 23 23:54:09.713544 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:54:09.717063 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 23 23:54:09.717315 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 23 23:54:09.730547 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 23 23:54:09.730881 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 23 23:54:09.733917 systemd[1]: ignition-files.service: Deactivated successfully. Jan 23 23:54:09.734144 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 23 23:54:09.752064 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 23 23:54:09.761826 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 23 23:54:09.766144 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 23 23:54:09.766514 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:09.773946 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 23 23:54:09.774215 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 23 23:54:09.800666 ignition[1445]: INFO : Ignition 2.19.0 Jan 23 23:54:09.800666 ignition[1445]: INFO : Stage: umount Jan 23 23:54:09.800666 ignition[1445]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 23 23:54:09.800666 ignition[1445]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws" Jan 23 23:54:09.809998 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 23 23:54:09.830576 ignition[1445]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1 Jan 23 23:54:09.830576 ignition[1445]: INFO : PUT result: OK Jan 23 23:54:09.830576 ignition[1445]: INFO : umount: umount passed Jan 23 23:54:09.830576 ignition[1445]: INFO : Ignition finished successfully Jan 23 23:54:09.810214 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 23 23:54:09.827534 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 23 23:54:09.831157 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 23 23:54:09.843397 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 23 23:54:09.845803 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 23 23:54:09.846001 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 23 23:54:09.851886 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 23 23:54:09.851997 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 23 23:54:09.852304 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 23 23:54:09.852378 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 23 23:54:09.852726 systemd[1]: Stopped target network.target - Network. 
Jan 23 23:54:09.852991 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 23 23:54:09.853078 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 23 23:54:09.853402 systemd[1]: Stopped target paths.target - Path Units. Jan 23 23:54:09.854486 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 23 23:54:09.864052 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:54:09.869738 systemd[1]: Stopped target slices.target - Slice Units. Jan 23 23:54:09.871944 systemd[1]: Stopped target sockets.target - Socket Units. Jan 23 23:54:09.874520 systemd[1]: iscsid.socket: Deactivated successfully. Jan 23 23:54:09.874675 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 23 23:54:09.878881 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 23 23:54:09.878980 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 23 23:54:09.883064 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 23 23:54:09.883174 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 23 23:54:09.885710 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 23 23:54:09.885798 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 23 23:54:09.889385 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 23 23:54:09.897653 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 23 23:54:09.900429 systemd-networkd[1201]: eth0: DHCPv6 lease lost Jan 23 23:54:09.911482 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 23 23:54:09.911735 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 23 23:54:09.926709 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 23 23:54:09.927143 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 23 23:54:09.935338 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 23 23:54:09.935560 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 23 23:54:09.955001 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 23 23:54:09.955515 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:54:09.962286 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 23 23:54:09.962432 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 23 23:54:09.977796 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 23 23:54:09.984768 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 23 23:54:09.984903 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 23 23:54:09.988271 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 23 23:54:09.988382 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:54:09.993463 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 23 23:54:09.994080 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 23 23:54:09.998294 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 23 23:54:09.998425 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:54:10.002499 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 23 23:54:10.049628 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 23 23:54:10.050789 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 23 23:54:10.058153 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 23 23:54:10.059746 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:54:10.077652 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 23 23:54:10.077780 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 23 23:54:10.080551 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 23 23:54:10.081033 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:54:10.092797 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 23 23:54:10.092923 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 23 23:54:10.095550 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 23 23:54:10.095684 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 23 23:54:10.098410 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 23 23:54:10.098523 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 23 23:54:10.112676 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 23 23:54:10.120440 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 23 23:54:10.120578 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:54:10.130116 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 23 23:54:10.130228 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:54:10.136138 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 23 23:54:10.136261 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:54:10.140843 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 23 23:54:10.140961 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:10.168578 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 23 23:54:10.170891 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 23 23:54:10.178099 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 23 23:54:10.190936 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 23 23:54:10.212558 systemd[1]: Switching root. Jan 23 23:54:10.283395 systemd-journald[251]: Journal stopped Jan 23 23:54:13.048701 systemd-journald[251]: Received SIGTERM from PID 1 (systemd). 
Jan 23 23:54:13.048855 kernel: SELinux: policy capability network_peer_controls=1 Jan 23 23:54:13.048904 kernel: SELinux: policy capability open_perms=1 Jan 23 23:54:13.048938 kernel: SELinux: policy capability extended_socket_class=1 Jan 23 23:54:13.048970 kernel: SELinux: policy capability always_check_network=0 Jan 23 23:54:13.049013 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 23 23:54:13.049047 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 23 23:54:13.049079 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 23 23:54:13.049117 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 23 23:54:13.049161 kernel: audit: type=1403 audit(1769212450.900:2): auid=4294967295 ses=4294967295 lsm=selinux res=1 Jan 23 23:54:13.049198 systemd[1]: Successfully loaded SELinux policy in 64ms. Jan 23 23:54:13.049241 systemd[1]: Relabeled /dev, /dev/shm, /run, /sys/fs/cgroup in 28.753ms. Jan 23 23:54:13.049279 systemd[1]: systemd 255 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP +GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT default-hierarchy=unified) Jan 23 23:54:13.049316 systemd[1]: Detected virtualization amazon. Jan 23 23:54:13.049351 systemd[1]: Detected architecture arm64. Jan 23 23:54:13.049385 systemd[1]: Detected first boot. Jan 23 23:54:13.049422 systemd[1]: Initializing machine ID from VM UUID. Jan 23 23:54:13.049465 zram_generator::config[1487]: No configuration found. Jan 23 23:54:13.049512 systemd[1]: Populated /etc with preset unit settings. Jan 23 23:54:13.049547 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 23 23:54:13.049579 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 23 23:54:13.056687 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 23 23:54:13.056769 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 23 23:54:13.056808 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 23 23:54:13.056842 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 23 23:54:13.056885 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 23 23:54:13.056921 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 23 23:54:13.056954 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 23 23:54:13.056985 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 23 23:54:13.057020 systemd[1]: Created slice user.slice - User and Session Slice. Jan 23 23:54:13.057056 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 23 23:54:13.057089 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 23 23:54:13.057126 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 23 23:54:13.057161 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 23 23:54:13.057200 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. 
Jan 23 23:54:13.057235 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 23 23:54:13.057270 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0... Jan 23 23:54:13.057304 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 23 23:54:13.057340 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 23 23:54:13.057372 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 23 23:54:13.057406 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 23 23:54:13.057443 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 23 23:54:13.057475 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 23 23:54:13.057507 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 23 23:54:13.057542 systemd[1]: Reached target slices.target - Slice Units. Jan 23 23:54:13.057575 systemd[1]: Reached target swap.target - Swaps. Jan 23 23:54:13.057635 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 23 23:54:13.057679 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 23 23:54:13.057716 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 23 23:54:13.057751 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 23 23:54:13.057786 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 23 23:54:13.057828 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 23 23:54:13.057864 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 23 23:54:13.057895 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 23 23:54:13.057931 systemd[1]: Mounting media.mount - External Media Directory... Jan 23 23:54:13.057965 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 23 23:54:13.057998 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 23 23:54:13.058029 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 23 23:54:13.058067 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 23 23:54:13.058105 systemd[1]: Reached target machines.target - Containers. Jan 23 23:54:13.058142 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 23 23:54:13.058175 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:54:13.058207 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 23 23:54:13.058244 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 23 23:54:13.058293 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:54:13.058347 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:54:13.058389 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:54:13.058424 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 23 23:54:13.058467 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 23 23:54:13.058500 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 23 23:54:13.058532 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 23 23:54:13.058569 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 23 23:54:13.064702 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 23 23:54:13.064817 systemd[1]: Stopped systemd-fsck-usr.service. Jan 23 23:54:13.064859 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 23 23:54:13.064893 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 23 23:54:13.064925 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 23 23:54:13.064968 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 23 23:54:13.065000 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 23 23:54:13.065031 systemd[1]: verity-setup.service: Deactivated successfully. Jan 23 23:54:13.065063 systemd[1]: Stopped verity-setup.service. Jan 23 23:54:13.065094 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 23 23:54:13.065126 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 23 23:54:13.065157 systemd[1]: Mounted media.mount - External Media Directory. Jan 23 23:54:13.065189 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 23 23:54:13.065229 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 23 23:54:13.065261 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 23 23:54:13.065296 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 23 23:54:13.065331 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 23 23:54:13.065362 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 23 23:54:13.065398 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:54:13.065430 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:54:13.065461 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:54:13.065493 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:54:13.065526 kernel: fuse: init (API version 7.39) Jan 23 23:54:13.065556 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 23 23:54:13.065587 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 23 23:54:13.065660 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 23 23:54:13.065702 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 23 23:54:13.065745 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 23 23:54:13.065780 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 23 23:54:13.065811 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 23 23:54:13.065842 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 23 23:54:13.065933 systemd-journald[1569]: Collecting audit messages is disabled. 
Jan 23 23:54:13.066000 kernel: loop: module loaded Jan 23 23:54:13.066037 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 23 23:54:13.066068 kernel: ACPI: bus type drm_connector registered Jan 23 23:54:13.066100 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 23 23:54:13.066134 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 23 23:54:13.066165 systemd-journald[1569]: Journal started Jan 23 23:54:13.066220 systemd-journald[1569]: Runtime Journal (/run/log/journal/ec2cd8686b8918ba2058b5f0d840e52d) is 8.0M, max 75.3M, 67.3M free. Jan 23 23:54:12.279839 systemd[1]: Queued start job for default target multi-user.target. Jan 23 23:54:12.347748 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6. Jan 23 23:54:12.348790 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 23 23:54:13.069711 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 23 23:54:13.087688 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management (Varlink). Jan 23 23:54:13.104692 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 23:54:13.118435 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 23 23:54:13.123689 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:54:13.143672 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 23 23:54:13.143787 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:54:13.165106 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 23 23:54:13.179128 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 23 23:54:13.191732 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 23 23:54:13.208908 systemd-tmpfiles[1583]: ACLs are not supported, ignoring. Jan 23 23:54:13.208957 systemd-tmpfiles[1583]: ACLs are not supported, ignoring. Jan 23 23:54:13.212067 systemd[1]: Started systemd-journald.service - Journal Service. Jan 23 23:54:13.223746 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 23 23:54:13.227267 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:54:13.227703 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:54:13.232530 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:54:13.233757 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:54:13.238996 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 23 23:54:13.242472 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 23:54:13.303332 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 23 23:54:13.307544 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 23 23:54:13.332394 kernel: loop0: detected capacity change from 0 to 200800 Jan 23 23:54:13.340270 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. 
Jan 23 23:54:13.354996 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 23 23:54:13.367920 systemd[1]: Starting systemd-machine-id-commit.service - Commit a transient machine-id on disk... Jan 23 23:54:13.371927 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:54:13.380896 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 23 23:54:13.457671 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 23 23:54:13.464748 systemd-journald[1569]: Time spent on flushing to /var/log/journal/ec2cd8686b8918ba2058b5f0d840e52d is 137.018ms for 910 entries. Jan 23 23:54:13.464748 systemd-journald[1569]: System Journal (/var/log/journal/ec2cd8686b8918ba2058b5f0d840e52d) is 8.0M, max 195.6M, 187.6M free. Jan 23 23:54:13.636183 systemd-journald[1569]: Received client request to flush runtime journal. Jan 23 23:54:13.636310 kernel: loop1: detected capacity change from 0 to 52536 Jan 23 23:54:13.636363 kernel: loop2: detected capacity change from 0 to 114328 Jan 23 23:54:13.471896 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 23 23:54:13.584333 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 23 23:54:13.602163 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 23 23:54:13.634058 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 23 23:54:13.643536 systemd[1]: Finished systemd-machine-id-commit.service - Commit a transient machine-id on disk. Jan 23 23:54:13.655398 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 23 23:54:13.677434 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 23 23:54:13.692975 systemd[1]: Starting systemd-udev-settle.service - Wait for udev To Complete Device Initialization... Jan 23 23:54:13.732092 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jan 23 23:54:13.732144 systemd-tmpfiles[1634]: ACLs are not supported, ignoring. Jan 23 23:54:13.757825 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 23 23:54:13.764497 udevadm[1640]: systemd-udev-settle.service is deprecated. Please fix lvm2-activation.service, lvm2-activation-early.service not to pull it in. Jan 23 23:54:13.787808 kernel: loop3: detected capacity change from 0 to 114432 Jan 23 23:54:13.896656 kernel: loop4: detected capacity change from 0 to 200800 Jan 23 23:54:13.923662 kernel: loop5: detected capacity change from 0 to 52536 Jan 23 23:54:13.945675 kernel: loop6: detected capacity change from 0 to 114328 Jan 23 23:54:13.962662 kernel: loop7: detected capacity change from 0 to 114432 Jan 23 23:54:13.973727 (sd-merge)[1644]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'. Jan 23 23:54:13.975535 (sd-merge)[1644]: Merged extensions into '/usr'. Jan 23 23:54:13.985865 systemd[1]: Reloading requested from client PID 1600 ('systemd-sysext') (unit systemd-sysext.service)... Jan 23 23:54:13.985905 systemd[1]: Reloading... Jan 23 23:54:14.210702 zram_generator::config[1673]: No configuration found. Jan 23 23:54:14.557508 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:14.684236 systemd[1]: Reloading finished in 697 ms. 
Jan 23 23:54:14.733187 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 23 23:54:14.740412 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 23 23:54:14.757952 systemd[1]: Starting ensure-sysext.service... Jan 23 23:54:14.767467 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 23 23:54:14.785061 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 23 23:54:14.808933 systemd[1]: Reloading requested from client PID 1722 ('systemctl') (unit ensure-sysext.service)... Jan 23 23:54:14.808979 systemd[1]: Reloading... Jan 23 23:54:14.840724 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 23 23:54:14.842251 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring. Jan 23 23:54:14.844572 systemd-tmpfiles[1723]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring. Jan 23 23:54:14.845496 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Jan 23 23:54:14.845920 systemd-tmpfiles[1723]: ACLs are not supported, ignoring. Jan 23 23:54:14.859267 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:54:14.861900 systemd-tmpfiles[1723]: Skipping /boot Jan 23 23:54:14.879706 ldconfig[1595]: /sbin/ldconfig: /lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 23 23:54:14.903386 systemd-tmpfiles[1723]: Detected autofs mount point /boot during canonicalization of boot. Jan 23 23:54:14.905882 systemd-tmpfiles[1723]: Skipping /boot Jan 23 23:54:14.916346 systemd-udevd[1724]: Using default interface naming scheme 'v255'. Jan 23 23:54:15.065401 zram_generator::config[1764]: No configuration found. Jan 23 23:54:15.223925 (udev-worker)[1761]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:54:15.445195 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1782) Jan 23 23:54:15.480562 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:15.643084 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped. Jan 23 23:54:15.644323 systemd[1]: Reloading finished in 834 ms. Jan 23 23:54:15.715147 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 23 23:54:15.721219 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 23 23:54:15.725762 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 23 23:54:15.863727 systemd[1]: Finished systemd-udev-settle.service - Wait for udev To Complete Device Initialization. Jan 23 23:54:15.896306 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM. Jan 23 23:54:15.909373 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:15.929236 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 23 23:54:15.932438 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. 
Jan 23 23:54:15.940211 systemd[1]: Starting lvm2-activation-early.service - Activation of LVM2 logical volumes... Jan 23 23:54:15.953480 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 23 23:54:15.967540 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 23 23:54:15.981480 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 23 23:54:15.985858 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:54:15.992235 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 23 23:54:16.003768 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 23 23:54:16.017761 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 23 23:54:16.032908 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 23 23:54:16.058550 lvm[1923]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:54:16.049988 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 23 23:54:16.065136 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 23 23:54:16.089337 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 23 23:54:16.104367 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 23 23:54:16.107209 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 23 23:54:16.107801 systemd[1]: Reached target time-set.target - System Time Set. Jan 23 23:54:16.128899 systemd[1]: Finished ensure-sysext.service. Jan 23 23:54:16.162148 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 23 23:54:16.222358 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 23 23:54:16.247924 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 23 23:54:16.248545 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 23 23:54:16.269109 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 23 23:54:16.269601 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 23 23:54:16.274890 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 23 23:54:16.288941 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 23 23:54:16.289428 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 23 23:54:16.299722 systemd[1]: Finished lvm2-activation-early.service - Activation of LVM2 logical volumes. Jan 23 23:54:16.305410 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 23 23:54:16.326971 systemd[1]: Starting lvm2-activation.service - Activation of LVM2 logical volumes... Jan 23 23:54:16.331670 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 23 23:54:16.333768 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 23 23:54:16.339765 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. 
Jan 23 23:54:16.344404 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 23 23:54:16.365668 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 23 23:54:16.379047 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 23 23:54:16.390370 lvm[1953]: WARNING: Failed to connect to lvmetad. Falling back to device scanning. Jan 23 23:54:16.421266 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 23 23:54:16.425989 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 23 23:54:16.449675 augenrules[1965]: No rules Jan 23 23:54:16.453781 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:16.470352 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 23 23:54:16.483735 systemd[1]: Finished lvm2-activation.service - Activation of LVM2 logical volumes. Jan 23 23:54:16.486859 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 23 23:54:16.576396 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 23 23:54:16.621153 systemd-networkd[1935]: lo: Link UP Jan 23 23:54:16.621932 systemd-networkd[1935]: lo: Gained carrier Jan 23 23:54:16.625603 systemd-networkd[1935]: Enumeration completed Jan 23 23:54:16.626102 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 23 23:54:16.629743 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:16.629752 systemd-networkd[1935]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 23 23:54:16.632868 systemd-networkd[1935]: eth0: Link UP Jan 23 23:54:16.633465 systemd-networkd[1935]: eth0: Gained carrier Jan 23 23:54:16.633694 systemd-networkd[1935]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name. Jan 23 23:54:16.639092 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 23 23:54:16.645749 systemd-networkd[1935]: eth0: DHCPv4 address 172.31.18.95/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 23:54:16.650931 systemd-resolved[1936]: Positive Trust Anchors: Jan 23 23:54:16.650975 systemd-resolved[1936]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 23 23:54:16.651040 systemd-resolved[1936]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 23 23:54:16.694913 systemd-resolved[1936]: Defaulting to hostname 'linux'. Jan 23 23:54:16.698653 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 23 23:54:16.701434 systemd[1]: Reached target network.target - Network. 
Jan 23 23:54:16.703787 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 23:54:16.706780 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 23:54:16.709605 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 23:54:16.712698 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 23:54:16.716108 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 23:54:16.718937 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 23:54:16.721891 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 23:54:16.725288 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 23:54:16.725361 systemd[1]: Reached target paths.target - Path Units. Jan 23 23:54:16.727839 systemd[1]: Reached target timers.target - Timer Units. Jan 23 23:54:16.731885 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 23:54:16.737396 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 23:54:16.747253 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 23 23:54:16.750926 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 23:54:16.753654 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 23:54:16.756522 systemd[1]: Reached target basic.target - Basic System. Jan 23 23:54:16.758966 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:16.759038 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 23:54:16.769815 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 23:54:16.775859 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 23:54:16.781214 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 23:54:16.786922 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 23:54:16.796081 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 23:54:16.798587 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 23:54:16.806083 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 23:54:16.814913 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 23:54:16.822222 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 23:54:16.830363 systemd[1]: Starting setup-oem.service - Setup OEM... Jan 23 23:54:16.840113 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 23:54:16.849101 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 23:54:16.880265 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 23:54:16.884536 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 23:54:16.887981 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. 
Jan 23 23:54:16.890340 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 23:54:16.899935 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 23:54:16.965723 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 23:54:16.967731 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 23:54:17.010669 jq[1989]: false Jan 23 23:54:17.019163 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 23:54:17.020444 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 23:54:17.063737 jq[1999]: true Jan 23 23:54:17.108502 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 23:54:17.108873 coreos-metadata[1987]: Jan 23 23:54:17.107 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:17.108873 coreos-metadata[1987]: Jan 23 23:54:17.107 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 23:54:17.110716 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 23:54:17.114796 coreos-metadata[1987]: Jan 23 23:54:17.113 INFO Fetch successful Jan 23 23:54:17.114796 coreos-metadata[1987]: Jan 23 23:54:17.113 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found loop4 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found loop5 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found loop6 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found loop7 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found nvme0n1 Jan 23 23:54:17.124752 extend-filesystems[1990]: Found nvme0n1p1 Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.122 INFO Fetch successful Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.122 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.122 INFO Fetch successful Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.122 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.146 INFO Fetch successful Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.146 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.148 INFO Fetch failed with 404: resource not found Jan 23 23:54:17.149404 coreos-metadata[1987]: Jan 23 23:54:17.148 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 23:54:17.149969 tar[2016]: linux-arm64/LICENSE Jan 23 23:54:17.149969 tar[2016]: linux-arm64/helm Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p2 Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p3 Jan 23 23:54:17.150422 extend-filesystems[1990]: Found usr Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p4 Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p6 Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p7 Jan 23 23:54:17.150422 extend-filesystems[1990]: Found nvme0n1p9 Jan 23 23:54:17.150422 extend-filesystems[1990]: Checking size of /dev/nvme0n1p9 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:17.228400 ntpd[1992]: 23 
Jan 23:54:17 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: corporation. Support and training for ntp-4 are Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: available at https://www.nwtime.org/support Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: proto: precision = 0.108 usec (-23) Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: basedate set to 2026-01-11 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listen normally on 3 eth0 172.31.18.95:123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: bind(21) AF_INET6 fe80::425:71ff:fe56:d01b%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: unable to create socket on eth0 (5) for fe80::425:71ff:fe56:d01b%2#123 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: failed to init interface for address fe80::425:71ff:fe56:d01b%2 Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:17.228400 ntpd[1992]: 23 Jan 23:54:17 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:17.174432 ntpd[1992]: ntpd 4.2.8p17@1.4004-o Fri Jan 23 21:53:23 UTC 2026 (1): Starting Jan 23 23:54:17.174222 (ntainerd)[2023]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetch successful Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetch successful Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetch successful Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.153 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.158 INFO Fetch successful Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.158 INFO Fetching 
http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 23:54:17.259423 coreos-metadata[1987]: Jan 23 23:54:17.158 INFO Fetch successful Jan 23 23:54:17.259970 update_engine[1998]: I20260123 23:54:17.237512 1998 main.cc:92] Flatcar Update Engine starting Jan 23 23:54:17.174483 ntpd[1992]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 23:54:17.176484 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 23:54:17.174506 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:17.204658 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 23 23:54:17.284428 extend-filesystems[1990]: Resized partition /dev/nvme0n1p9 Jan 23 23:54:17.320694 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 23:54:17.174526 ntpd[1992]: ntp-4 is maintained by Network Time Foundation, Jan 23 23:54:17.215736 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 23 23:54:17.321502 update_engine[1998]: I20260123 23:54:17.298733 1998 update_check_scheduler.cc:74] Next update check in 5m53s Jan 23 23:54:17.321562 extend-filesystems[2039]: resize2fs 1.47.1 (20-May-2024) Jan 23 23:54:17.327235 jq[2026]: true Jan 23 23:54:17.174545 ntpd[1992]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 23:54:17.215811 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 23:54:17.174564 ntpd[1992]: corporation. Support and training for ntp-4 are Jan 23 23:54:17.218977 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 23:54:17.174583 ntpd[1992]: available at https://www.nwtime.org/support Jan 23 23:54:17.219018 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 23:54:17.174603 ntpd[1992]: ---------------------------------------------------- Jan 23 23:54:17.271221 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... Jan 23 23:54:17.182699 dbus-daemon[1988]: [system] SELinux support is enabled Jan 23 23:54:17.288462 systemd[1]: Started update-engine.service - Update Engine. Jan 23 23:54:17.183787 ntpd[1992]: proto: precision = 0.108 usec (-23) Jan 23 23:54:17.315965 systemd[1]: Started locksmithd.service - Cluster reboot manager. 
Jan 23 23:54:17.184916 ntpd[1992]: basedate set to 2026-01-11 Jan 23 23:54:17.184949 ntpd[1992]: gps base set to 2026-01-11 (week 2401) Jan 23 23:54:17.192427 ntpd[1992]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 23:54:17.192505 ntpd[1992]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 23:54:17.192876 ntpd[1992]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 23:54:17.192946 ntpd[1992]: Listen normally on 3 eth0 172.31.18.95:123 Jan 23 23:54:17.193014 ntpd[1992]: Listen normally on 4 lo [::1]:123 Jan 23 23:54:17.193091 ntpd[1992]: bind(21) AF_INET6 fe80::425:71ff:fe56:d01b%2#123 flags 0x11 failed: Cannot assign requested address Jan 23 23:54:17.193134 ntpd[1992]: unable to create socket on eth0 (5) for fe80::425:71ff:fe56:d01b%2#123 Jan 23 23:54:17.193162 ntpd[1992]: failed to init interface for address fe80::425:71ff:fe56:d01b%2 Jan 23 23:54:17.193216 ntpd[1992]: Listening on routing socket on fd #21 for interface updates Jan 23 23:54:17.200692 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:17.200753 ntpd[1992]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 23:54:17.222396 dbus-daemon[1988]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.1' (uid=244 pid=1935 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:17.250594 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 23:54:17.407859 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 23:54:17.411151 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 23:54:17.430912 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 23:54:17.454303 extend-filesystems[2039]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 23:54:17.454303 extend-filesystems[2039]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 23:54:17.454303 extend-filesystems[2039]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. Jan 23 23:54:17.459018 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 23:54:17.470530 extend-filesystems[1990]: Resized filesystem in /dev/nvme0n1p9 Jan 23 23:54:17.462163 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 23:54:17.484585 systemd-logind[1997]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 23:54:17.496342 systemd-logind[1997]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 23:54:17.496727 systemd-logind[1997]: New seat seat0. Jan 23 23:54:17.501260 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 23:54:17.583671 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (1787) Jan 23 23:54:17.612341 bash[2079]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:17.616324 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 23:54:17.657145 systemd[1]: Starting sshkeys.service... Jan 23 23:54:17.678185 systemd-networkd[1935]: eth0: Gained IPv6LL Jan 23 23:54:17.699521 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 23:54:17.713318 systemd[1]: Reached target network-online.target - Network is Online. 
Jan 23 23:54:17.727239 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 23:54:17.750115 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:17.758630 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 23 23:54:17.772257 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 23:54:17.778995 locksmithd[2043]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 23:54:17.789204 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 23 23:54:17.854638 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.hostname1' Jan 23 23:54:17.855179 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 23:54:17.863499 dbus-daemon[1988]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2040 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 23:54:17.931901 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 23:54:17.949041 amazon-ssm-agent[2110]: Initializing new seelog logger Jan 23 23:54:17.959453 amazon-ssm-agent[2110]: New Seelog Logger Creation Complete Jan 23 23:54:17.959453 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.959453 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.962871 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 processing appconfig overrides Jan 23 23:54:17.967642 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.967642 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO Proxy environment variables: Jan 23 23:54:17.973027 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.973027 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 processing appconfig overrides Jan 23 23:54:17.973027 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.973027 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.973027 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 processing appconfig overrides Jan 23 23:54:17.986398 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 23:54:17.991924 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.991924 amazon-ssm-agent[2110]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 23:54:17.991924 amazon-ssm-agent[2110]: 2026/01/23 23:54:17 processing appconfig overrides Jan 23 23:54:18.020720 polkitd[2132]: Started polkitd version 121 Jan 23 23:54:18.042217 polkitd[2132]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 23:54:18.044816 polkitd[2132]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 23 23:54:18.045966 polkitd[2132]: Finished loading, compiling and executing 2 rules Jan 23 23:54:18.057320 dbus-daemon[1988]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 23:54:18.057605 systemd[1]: Started polkit.service - Authorization Manager. 
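The SSM agent logs the same three-step dance several times: find /etc/amazon/ssm/amazon-ssm-agent.json, apply it as an override, process the result. A hedged sketch of that override pattern; the agent itself is a Go program, so this shallow merge is illustrative of the shape, not a transcription of its logic:

    # Illustrative config-override pattern matching the agent lines above
    # ("Found config file ... Applying config override ... processing
    # appconfig overrides"). The real agent is Go; this is an assumption
    # about shape only.
    import json
    from pathlib import Path

    def load_config(defaults: dict,
                    override="/etc/amazon/ssm/amazon-ssm-agent.json") -> dict:
        merged = dict(defaults)
        path = Path(override)
        if path.is_file():                               # "Found config file at ..."
            merged.update(json.loads(path.read_text()))  # "Applying config override"
        return merged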
Jan 23 23:54:18.061256 polkitd[2132]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 23:54:18.073005 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO https_proxy: Jan 23 23:54:18.126598 systemd-hostnamed[2040]: Hostname set to <ip-172-31-18-95> (transient) Jan 23 23:54:18.133772 systemd-resolved[1936]: System hostname changed to 'ip-172-31-18-95'. Jan 23 23:54:18.173738 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO http_proxy: Jan 23 23:54:18.278143 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO no_proxy: Jan 23 23:54:18.303948 coreos-metadata[2118]: Jan 23 23:54:18.303 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 23:54:18.311324 coreos-metadata[2118]: Jan 23 23:54:18.309 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 23:54:18.315640 coreos-metadata[2118]: Jan 23 23:54:18.313 INFO Fetch successful Jan 23 23:54:18.315640 coreos-metadata[2118]: Jan 23 23:54:18.313 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 23:54:18.318719 coreos-metadata[2118]: Jan 23 23:54:18.317 INFO Fetch successful Jan 23 23:54:18.325407 unknown[2118]: wrote ssh authorized keys file for user: core Jan 23 23:54:18.381083 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO Checking if agent identity type OnPrem can be assumed Jan 23 23:54:18.402661 update-ssh-keys[2196]: Updated "/home/core/.ssh/authorized_keys" Jan 23 23:54:18.406930 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 23:54:18.414066 containerd[2023]: time="2026-01-23T23:54:18.413941260Z" level=info msg="starting containerd" revision=174e0d1785eeda18dc2beba45e1d5a188771636b version=v1.7.21 Jan 23 23:54:18.426762 systemd[1]: Finished sshkeys.service. Jan 23 23:54:18.482183 amazon-ssm-agent[2110]: 2026-01-23 23:54:17 INFO Checking if agent identity type EC2 can be assumed Jan 23 23:54:18.534736 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 23:54:18.585360 containerd[2023]: time="2026-01-23T23:54:18.585164484Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.aufs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.586252 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO Agent will take identity from EC2 Jan 23 23:54:18.590761 containerd[2023]: time="2026-01-23T23:54:18.590694072Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.aufs\"..." error="aufs is not supported (modprobe aufs failed: exit status 1 \"modprobe: FATAL: Module aufs not found in directory /lib/modules/6.6.119-flatcar\\n\"): skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:18.591113 containerd[2023]: time="2026-01-23T23:54:18.591082872Z" level=info msg="loading plugin \"io.containerd.event.v1.exchange\"..." type=io.containerd.event.v1 Jan 23 23:54:18.591316 containerd[2023]: time="2026-01-23T23:54:18.591285840Z" level=info msg="loading plugin \"io.containerd.internal.v1.opt\"..." type=io.containerd.internal.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.591722616Z" level=info msg="loading plugin \"io.containerd.warning.v1.deprecations\"..." type=io.containerd.warning.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.591766152Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.blockfile\"..."
type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.591893016Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.blockfile\"..." error="no scratch file generator: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.591921732Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.592201032Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.btrfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.592231884Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.592262796Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.devmapper\"..." error="devmapper not configured: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.592287132Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.native\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.593688 containerd[2023]: time="2026-01-23T23:54:18.592437264Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.overlayfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.595176 containerd[2023]: time="2026-01-23T23:54:18.595131336Z" level=info msg="loading plugin \"io.containerd.snapshotter.v1.zfs\"..." type=io.containerd.snapshotter.v1 Jan 23 23:54:18.595792 containerd[2023]: time="2026-01-23T23:54:18.595746840Z" level=info msg="skip loading plugin \"io.containerd.snapshotter.v1.zfs\"..." error="path /var/lib/containerd/io.containerd.snapshotter.v1.zfs must be a zfs filesystem to be used with the zfs snapshotter: skip plugin" type=io.containerd.snapshotter.v1 Jan 23 23:54:18.596327 containerd[2023]: time="2026-01-23T23:54:18.596294712Z" level=info msg="loading plugin \"io.containerd.content.v1.content\"..." type=io.containerd.content.v1 Jan 23 23:54:18.596658 containerd[2023]: time="2026-01-23T23:54:18.596599860Z" level=info msg="loading plugin \"io.containerd.metadata.v1.bolt\"..." type=io.containerd.metadata.v1 Jan 23 23:54:18.597399 containerd[2023]: time="2026-01-23T23:54:18.597332484Z" level=info msg="metadata content store policy set" policy=shared Jan 23 23:54:18.608955 containerd[2023]: time="2026-01-23T23:54:18.608902440Z" level=info msg="loading plugin \"io.containerd.gc.v1.scheduler\"..." type=io.containerd.gc.v1 Jan 23 23:54:18.609173 containerd[2023]: time="2026-01-23T23:54:18.609146100Z" level=info msg="loading plugin \"io.containerd.differ.v1.walking\"..." type=io.containerd.differ.v1 Jan 23 23:54:18.610455 containerd[2023]: time="2026-01-23T23:54:18.609942288Z" level=info msg="loading plugin \"io.containerd.lease.v1.manager\"..." type=io.containerd.lease.v1 Jan 23 23:54:18.610455 containerd[2023]: time="2026-01-23T23:54:18.609990636Z" level=info msg="loading plugin \"io.containerd.streaming.v1.manager\"..." type=io.containerd.streaming.v1 Jan 23 23:54:18.610455 containerd[2023]: time="2026-01-23T23:54:18.610035576Z" level=info msg="loading plugin \"io.containerd.runtime.v1.linux\"..." 
type=io.containerd.runtime.v1 Jan 23 23:54:18.610455 containerd[2023]: time="2026-01-23T23:54:18.610371636Z" level=info msg="loading plugin \"io.containerd.monitor.v1.cgroups\"..." type=io.containerd.monitor.v1 Jan 23 23:54:18.613719 containerd[2023]: time="2026-01-23T23:54:18.612294696Z" level=info msg="loading plugin \"io.containerd.runtime.v2.task\"..." type=io.containerd.runtime.v2 Jan 23 23:54:18.613719 containerd[2023]: time="2026-01-23T23:54:18.612522156Z" level=info msg="loading plugin \"io.containerd.runtime.v2.shim\"..." type=io.containerd.runtime.v2 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.613994581Z" level=info msg="loading plugin \"io.containerd.sandbox.store.v1.local\"..." type=io.containerd.sandbox.store.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614046025Z" level=info msg="loading plugin \"io.containerd.sandbox.controller.v1.local\"..." type=io.containerd.sandbox.controller.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614079865Z" level=info msg="loading plugin \"io.containerd.service.v1.containers-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614112337Z" level=info msg="loading plugin \"io.containerd.service.v1.content-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614148709Z" level=info msg="loading plugin \"io.containerd.service.v1.diff-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614181025Z" level=info msg="loading plugin \"io.containerd.service.v1.images-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614213977Z" level=info msg="loading plugin \"io.containerd.service.v1.introspection-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614247337Z" level=info msg="loading plugin \"io.containerd.service.v1.namespaces-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614278453Z" level=info msg="loading plugin \"io.containerd.service.v1.snapshots-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614306137Z" level=info msg="loading plugin \"io.containerd.service.v1.tasks-service\"..." type=io.containerd.service.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.614367961Z" level=info msg="loading plugin \"io.containerd.grpc.v1.containers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.616644625Z" level=info msg="loading plugin \"io.containerd.grpc.v1.content\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.616696849Z" level=info msg="loading plugin \"io.containerd.grpc.v1.diff\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.619645 containerd[2023]: time="2026-01-23T23:54:18.616730929Z" level=info msg="loading plugin \"io.containerd.grpc.v1.events\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616764001Z" level=info msg="loading plugin \"io.containerd.grpc.v1.images\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616796005Z" level=info msg="loading plugin \"io.containerd.grpc.v1.introspection\"..." 
type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616824493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.leases\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616855609Z" level=info msg="loading plugin \"io.containerd.grpc.v1.namespaces\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616885489Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandbox-controllers\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616934317Z" level=info msg="loading plugin \"io.containerd.grpc.v1.sandboxes\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.620336 containerd[2023]: time="2026-01-23T23:54:18.616968181Z" level=info msg="loading plugin \"io.containerd.grpc.v1.snapshots\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.621872881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.streaming\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622049881Z" level=info msg="loading plugin \"io.containerd.grpc.v1.tasks\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622112965Z" level=info msg="loading plugin \"io.containerd.transfer.v1.local\"..." type=io.containerd.transfer.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622196605Z" level=info msg="loading plugin \"io.containerd.grpc.v1.transfer\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622232137Z" level=info msg="loading plugin \"io.containerd.grpc.v1.version\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622270933Z" level=info msg="loading plugin \"io.containerd.internal.v1.restart\"..." type=io.containerd.internal.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622550245Z" level=info msg="loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." type=io.containerd.tracing.processor.v1 Jan 23 23:54:18.622656 containerd[2023]: time="2026-01-23T23:54:18.622628149Z" level=info msg="skip loading plugin \"io.containerd.tracing.processor.v1.otlp\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.tracing.processor.v1 Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622674889Z" level=info msg="loading plugin \"io.containerd.internal.v1.tracing\"..." type=io.containerd.internal.v1 Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622720501Z" level=info msg="skip loading plugin \"io.containerd.internal.v1.tracing\"..." error="skip plugin: tracing endpoint not configured" type=io.containerd.internal.v1 Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622749361Z" level=info msg="loading plugin \"io.containerd.grpc.v1.healthcheck\"..." type=io.containerd.grpc.v1 Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622791253Z" level=info msg="loading plugin \"io.containerd.nri.v1.nri\"..." type=io.containerd.nri.v1 Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622826173Z" level=info msg="NRI interface is disabled by configuration." Jan 23 23:54:18.623093 containerd[2023]: time="2026-01-23T23:54:18.622854493Z" level=info msg="loading plugin \"io.containerd.grpc.v1.cri\"..." 
type=io.containerd.grpc.v1 Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.627866197Z" level=info msg="Start cri plugin with config {PluginConfig:{ContainerdConfig:{Snapshotter:overlayfs DefaultRuntimeName:runc DefaultRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} UntrustedWorkloadRuntime:{Type: Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:} Runtimes:map[runc:{Type:io.containerd.runc.v2 Path: Engine: PodAnnotations:[] ContainerAnnotations:[] Root: Options:map[SystemdCgroup:true] PrivilegedWithoutHostDevices:false PrivilegedWithoutHostDevicesAllDevicesAllowed:false BaseRuntimeSpec: NetworkPluginConfDir: NetworkPluginMaxConfNum:0 Snapshotter: SandboxMode:podsandbox}] NoPivot:false DisableSnapshotAnnotations:true DiscardUnpackedLayers:false IgnoreBlockIONotEnabledErrors:false IgnoreRdtNotEnabledErrors:false} CniConfig:{NetworkPluginBinDir:/opt/cni/bin NetworkPluginConfDir:/etc/cni/net.d NetworkPluginMaxConfNum:1 NetworkPluginSetupSerially:false NetworkPluginConfTemplate: IPPreference:} Registry:{ConfigPath: Mirrors:map[] Configs:map[] Auths:map[] Headers:map[]} ImageDecryption:{KeyModel:node} DisableTCPService:true StreamServerAddress:127.0.0.1 StreamServerPort:0 StreamIdleTimeout:4h0m0s EnableSelinux:true SelinuxCategoryRange:1024 SandboxImage:registry.k8s.io/pause:3.8 StatsCollectPeriod:10 SystemdCgroup:false EnableTLSStreaming:false X509KeyPairStreaming:{TLSCertFile: TLSKeyFile:} MaxContainerLogLineSize:16384 DisableCgroup:false DisableApparmor:false RestrictOOMScoreAdj:false MaxConcurrentDownloads:3 DisableProcMount:false UnsetSeccompProfile: TolerateMissingHugetlbController:true DisableHugetlbController:true DeviceOwnershipFromSecurityContext:false IgnoreImageDefinedVolumes:false NetNSMountsUnderStateDir:false EnableUnprivilegedPorts:false EnableUnprivilegedICMP:false EnableCDI:false CDISpecDirs:[/etc/cdi /var/run/cdi] ImagePullProgressTimeout:5m0s DrainExecSyncIOTimeout:0s ImagePullWithSyncFs:false IgnoreDeprecationWarnings:[]} ContainerdRootDir:/var/lib/containerd ContainerdEndpoint:/run/containerd/containerd.sock RootDir:/var/lib/containerd/io.containerd.grpc.v1.cri StateDir:/run/containerd/io.containerd.grpc.v1.cri}" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.628512613Z" level=info msg="Connect containerd service" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.628582237Z" level=info msg="using legacy CRI server" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.628601581Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.628795489Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\"" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.630053581Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:54:18.633848 
containerd[2023]: time="2026-01-23T23:54:18.632931757Z" level=info msg="Start subscribing containerd event" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.633022789Z" level=info msg="Start recovering state" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.633155113Z" level=info msg="Start event monitor" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.633179617Z" level=info msg="Start snapshots syncer" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.633208909Z" level=info msg="Start cni network conf syncer for default" Jan 23 23:54:18.633848 containerd[2023]: time="2026-01-23T23:54:18.633229285Z" level=info msg="Start streaming server" Jan 23 23:54:18.637632 containerd[2023]: time="2026-01-23T23:54:18.635968261Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 23:54:18.637632 containerd[2023]: time="2026-01-23T23:54:18.636793069Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 23:54:18.645770 containerd[2023]: time="2026-01-23T23:54:18.642532021Z" level=info msg="containerd successfully booted in 0.231677s" Jan 23 23:54:18.642723 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 23:54:18.692683 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:18.792802 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:18.892992 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] using named pipe channel for IPC Jan 23 23:54:18.992483 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.2.0.0 Jan 23 23:54:19.092763 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 23:54:19.194628 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 23:54:19.295654 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [amazon-ssm-agent] registrar detected. Attempting registration Jan 23 23:54:19.394339 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [Registrar] Starting registrar module Jan 23 23:54:19.461115 tar[2016]: linux-arm64/README.md Jan 23 23:54:19.497121 amazon-ssm-agent[2110]: 2026-01-23 23:54:18 INFO [EC2Identity] no registration info found for ec2 instance, attempting registration Jan 23 23:54:19.501071 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 23 23:54:19.912262 amazon-ssm-agent[2110]: 2026-01-23 23:54:19 INFO [EC2Identity] EC2 registration was successful. 
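containerd reports serving on both its gRPC socket and the ttrpc twin just before systemd marks the unit started. A bare connectivity probe of those paths; this proves liveness only (real clients speak gRPC/ttrpc over these sockets, and a default install needs root to reach them):

    # Connect-only probe of the sockets containerd announces above.
    import socket

    for path in ("/run/containerd/containerd.sock",
                 "/run/containerd/containerd.sock.ttrpc"):
        s = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
        s.connect(path)
        print(path, "is accepting connections")
        s.close()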
Jan 23 23:54:19.954248 amazon-ssm-agent[2110]: 2026-01-23 23:54:19 INFO [CredentialRefresher] credentialRefresher has started Jan 23 23:54:19.954248 amazon-ssm-agent[2110]: 2026-01-23 23:54:19 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 23:54:19.954457 amazon-ssm-agent[2110]: 2026-01-23 23:54:19 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 23:54:20.012832 amazon-ssm-agent[2110]: 2026-01-23 23:54:19 INFO [CredentialRefresher] Next credential rotation will be in 31.79999283 minutes Jan 23 23:54:20.175783 ntpd[1992]: Listen normally on 6 eth0 [fe80::425:71ff:fe56:d01b%2]:123 Jan 23 23:54:20.513400 sshd_keygen[2024]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 23:54:20.563707 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 23:54:20.574179 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 23:54:20.587380 systemd[1]: Started sshd@0-172.31.18.95:22-4.153.228.146:58132.service - OpenSSH per-connection server daemon (4.153.228.146:58132). Jan 23 23:54:20.607426 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 23:54:20.608914 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 23:54:20.618200 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 23:54:20.664292 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 23:54:20.677444 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 23:54:20.693951 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 23:54:20.697829 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 23:54:20.989956 amazon-ssm-agent[2110]: 2026-01-23 23:54:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 23:54:21.052122 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:21.058922 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 23 23:54:21.062556 systemd[1]: Startup finished in 1.216s (kernel) + 9.069s (initrd) + 10.224s (userspace) = 20.509s. Jan 23 23:54:21.071666 (kubelet)[2243]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:21.090499 amazon-ssm-agent[2110]: 2026-01-23 23:54:20 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2236) started Jan 23 23:54:21.129048 sshd[2224]: Accepted publickey for core from 4.153.228.146 port 58132 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:21.136793 sshd[2224]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:21.165866 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 23:54:21.175193 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 23:54:21.183490 systemd-logind[1997]: New session 1 of user core. Jan 23 23:54:21.225241 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500.
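The ntpd entry above closes the loop on the bind failure at 23:54:17: ntpd could not bind fe80::425:71ff:fe56:d01b%2 before eth0 gained its IPv6 link-local address (logged at 23:54:17.678), and its routing-socket watcher picks the interface up here. A sketch of the same scoped bind; it raises "Cannot assign requested address" (EADDRNOTAVAIL) until the kernel has actually assigned the address:

    # A link-local IPv6 bind needs a scope id -- the "%2" above is eth0's
    # interface index. Before the address exists on eth0 this raises
    # EADDRNOTAVAIL, exactly the failure ntpd logged at 23:54:17.
    import socket

    scope = socket.if_nametoindex("eth0")      # 2 on this instance
    s = socket.socket(socket.AF_INET6, socket.SOCK_DGRAM)
    s.bind(("fe80::425:71ff:fe56:d01b", 123, 0, scope))  # port 123 needs root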
Jan 23 23:54:21.245283 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 23:54:21.263406 (systemd)[2252]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 23:54:21.564854 systemd[2252]: Queued start job for default target default.target. Jan 23 23:54:21.577209 systemd[2252]: Created slice app.slice - User Application Slice. Jan 23 23:54:21.577284 systemd[2252]: Reached target paths.target - Paths. Jan 23 23:54:21.577318 systemd[2252]: Reached target timers.target - Timers. Jan 23 23:54:21.580257 systemd[2252]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 23 23:54:21.621433 systemd[2252]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 23:54:21.622761 systemd[2252]: Reached target sockets.target - Sockets. Jan 23 23:54:21.623040 systemd[2252]: Reached target basic.target - Basic System. Jan 23 23:54:21.623312 systemd[2252]: Reached target default.target - Main User Target. Jan 23 23:54:21.623394 systemd[2252]: Startup finished in 338ms. Jan 23 23:54:21.623582 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 23:54:21.634968 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 23:54:22.007158 systemd[1]: Started sshd@1-172.31.18.95:22-4.153.228.146:58140.service - OpenSSH per-connection server daemon (4.153.228.146:58140). Jan 23 23:54:22.188964 kubelet[2243]: E0123 23:54:22.188879 2243 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:22.192773 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:22.193099 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 23:54:22.194126 systemd[1]: kubelet.service: Consumed 1.315s CPU time. Jan 23 23:54:22.516532 sshd[2272]: Accepted publickey for core from 4.153.228.146 port 58140 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:22.519495 sshd[2272]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:22.529509 systemd-logind[1997]: New session 2 of user core. Jan 23 23:54:22.540965 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 23:54:22.875193 sshd[2272]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:22.883086 systemd-logind[1997]: Session 2 logged out. Waiting for processes to exit. Jan 23 23:54:22.884805 systemd[1]: sshd@1-172.31.18.95:22-4.153.228.146:58140.service: Deactivated successfully. Jan 23 23:54:22.888283 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 23:54:22.890201 systemd-logind[1997]: Removed session 2. Jan 23 23:54:22.969169 systemd[1]: Started sshd@2-172.31.18.95:22-4.153.228.146:58146.service - OpenSSH per-connection server daemon (4.153.228.146:58146). Jan 23 23:54:23.470528 sshd[2281]: Accepted publickey for core from 4.153.228.146 port 58146 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:23.473400 sshd[2281]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:23.483092 systemd-logind[1997]: New session 3 of user core. Jan 23 23:54:23.490945 systemd[1]: Started session-3.scope - Session 3 of User core. 
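The kubelet exit above is the familiar pre-bootstrap state: the unit starts at boot, finds no /var/lib/kubelet/config.yaml, and dies, leaving systemd to restart it. On a kubeadm-style node that file only appears once kubeadm init/join writes it, after which the restart loop brings the kubelet up cleanly; whether that is the provisioning flow on this host is an assumption, since the log shows only the missing file:

    # The crash loop above ends once something writes the kubelet config.
    # Which provisioner does that here is an assumption; the log only shows
    # that the file is absent.
    from pathlib import Path

    cfg = Path("/var/lib/kubelet/config.yaml")
    print("kubelet config present:", cfg.exists())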
Jan 23 23:54:23.817993 sshd[2281]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:23.823492 systemd[1]: sshd@2-172.31.18.95:22-4.153.228.146:58146.service: Deactivated successfully. Jan 23 23:54:23.823979 systemd-logind[1997]: Session 3 logged out. Waiting for processes to exit. Jan 23 23:54:23.827398 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 23:54:23.832442 systemd-logind[1997]: Removed session 3. Jan 23 23:54:23.910122 systemd[1]: Started sshd@3-172.31.18.95:22-4.153.228.146:58158.service - OpenSSH per-connection server daemon (4.153.228.146:58158). Jan 23 23:54:23.844115 systemd-resolved[1936]: Clock change detected. Flushing caches. Jan 23 23:54:23.850981 systemd-journald[1569]: Time jumped backwards, rotating. Jan 23 23:54:24.088658 sshd[2288]: Accepted publickey for core from 4.153.228.146 port 58158 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:24.091313 sshd[2288]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:24.098568 systemd-logind[1997]: New session 4 of user core. Jan 23 23:54:24.110156 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 23:54:24.449234 sshd[2288]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:24.454410 systemd[1]: sshd@3-172.31.18.95:22-4.153.228.146:58158.service: Deactivated successfully. Jan 23 23:54:24.456850 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 23:54:24.461734 systemd-logind[1997]: Session 4 logged out. Waiting for processes to exit. Jan 23 23:54:24.463544 systemd-logind[1997]: Removed session 4. Jan 23 23:54:24.546408 systemd[1]: Started sshd@4-172.31.18.95:22-4.153.228.146:44322.service - OpenSSH per-connection server daemon (4.153.228.146:44322). Jan 23 23:54:25.035867 sshd[2296]: Accepted publickey for core from 4.153.228.146 port 44322 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:25.038500 sshd[2296]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:25.045988 systemd-logind[1997]: New session 5 of user core. Jan 23 23:54:25.058150 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 23 23:54:25.328522 sudo[2299]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 23:54:25.329218 sudo[2299]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:25.344471 sudo[2299]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:25.421750 sshd[2296]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:25.427019 systemd[1]: sshd@4-172.31.18.95:22-4.153.228.146:44322.service: Deactivated successfully. Jan 23 23:54:25.430381 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 23:54:25.435448 systemd-logind[1997]: Session 5 logged out. Waiting for processes to exit. Jan 23 23:54:25.437248 systemd-logind[1997]: Removed session 5. Jan 23 23:54:25.530418 systemd[1]: Started sshd@5-172.31.18.95:22-4.153.228.146:44330.service - OpenSSH per-connection server daemon (4.153.228.146:44330). Jan 23 23:54:26.059214 sshd[2304]: Accepted publickey for core from 4.153.228.146 port 44330 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:26.061990 sshd[2304]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:26.069437 systemd-logind[1997]: New session 6 of user core. Jan 23 23:54:26.080152 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 23:54:26.355127 sudo[2308]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 23:54:26.356289 sudo[2308]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:26.362644 sudo[2308]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:26.372461 sudo[2307]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/bin/systemctl restart audit-rules Jan 23 23:54:26.373165 sudo[2307]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:26.395404 systemd[1]: Stopping audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:26.410760 auditctl[2311]: No rules Jan 23 23:54:26.411594 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 23:54:26.412009 systemd[1]: Stopped audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:26.418540 systemd[1]: Starting audit-rules.service - Load Security Auditing Rules... Jan 23 23:54:26.470632 augenrules[2329]: No rules Jan 23 23:54:26.473110 systemd[1]: Finished audit-rules.service - Load Security Auditing Rules. Jan 23 23:54:26.475547 sudo[2307]: pam_unix(sudo:session): session closed for user root Jan 23 23:54:26.559234 sshd[2304]: pam_unix(sshd:session): session closed for user core Jan 23 23:54:26.565206 systemd-logind[1997]: Session 6 logged out. Waiting for processes to exit. Jan 23 23:54:26.566763 systemd[1]: sshd@5-172.31.18.95:22-4.153.228.146:44330.service: Deactivated successfully. Jan 23 23:54:26.569617 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 23:54:26.571297 systemd-logind[1997]: Removed session 6. Jan 23 23:54:26.659417 systemd[1]: Started sshd@6-172.31.18.95:22-4.153.228.146:44344.service - OpenSSH per-connection server daemon (4.153.228.146:44344). Jan 23 23:54:27.197281 sshd[2337]: Accepted publickey for core from 4.153.228.146 port 44344 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:54:27.199923 sshd[2337]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:54:27.208988 systemd-logind[1997]: New session 7 of user core. Jan 23 23:54:27.217196 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 23:54:27.493745 sudo[2340]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 23:54:27.494416 sudo[2340]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 23:54:28.173333 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 23:54:28.173623 (dockerd)[2357]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 23:54:28.709043 dockerd[2357]: time="2026-01-23T23:54:28.708946000Z" level=info msg="Starting up" Jan 23 23:54:28.917960 dockerd[2357]: time="2026-01-23T23:54:28.917717693Z" level=info msg="Loading containers: start." Jan 23 23:54:29.121966 kernel: Initializing XFRM netlink socket Jan 23 23:54:29.185452 (udev-worker)[2379]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:54:29.287370 systemd-networkd[1935]: docker0: Link UP Jan 23 23:54:29.320353 dockerd[2357]: time="2026-01-23T23:54:29.320299539Z" level=info msg="Loading containers: done." 
Jan 23 23:54:29.349980 dockerd[2357]: time="2026-01-23T23:54:29.349803771Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 23:54:29.350803 dockerd[2357]: time="2026-01-23T23:54:29.350083671Z" level=info msg="Docker daemon" commit=061aa95809be396a6b5542618d8a34b02a21ff77 containerd-snapshotter=false storage-driver=overlay2 version=26.1.0 Jan 23 23:54:29.350803 dockerd[2357]: time="2026-01-23T23:54:29.350272563Z" level=info msg="Daemon has completed initialization" Jan 23 23:54:29.425423 dockerd[2357]: time="2026-01-23T23:54:29.424966479Z" level=info msg="API listen on /run/docker.sock" Jan 23 23:54:29.427620 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 23:54:31.534547 containerd[2023]: time="2026-01-23T23:54:31.534478626Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 23:54:31.960770 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 23:54:31.968432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:32.203954 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2840656682.mount: Deactivated successfully. Jan 23 23:54:32.415223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:32.433028 (kubelet)[2518]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:32.536501 kubelet[2518]: E0123 23:54:32.536266 2518 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:32.549167 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:32.549492 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
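With "API listen on /run/docker.sock" logged above, the daemon is ready for clients. A minimal liveness check, assuming the Docker SDK for Python (pip install docker) and permission on the socket:

    # Ping the daemon that just logged "API listen on /run/docker.sock".
    import docker

    client = docker.from_env()
    print(client.ping())                # True once the API is up
    print(client.version()["Version"])  # "26.1.0", matching the daemon log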
Jan 23 23:54:33.713391 containerd[2023]: time="2026-01-23T23:54:33.713292357Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.715710 containerd[2023]: time="2026-01-23T23:54:33.715630053Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040" Jan 23 23:54:33.717979 containerd[2023]: time="2026-01-23T23:54:33.717870777Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.725919 containerd[2023]: time="2026-01-23T23:54:33.724598829Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:33.730740 containerd[2023]: time="2026-01-23T23:54:33.730655001Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.196089003s" Jan 23 23:54:33.730975 containerd[2023]: time="2026-01-23T23:54:33.730941813Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\"" Jan 23 23:54:33.734206 containerd[2023]: time="2026-01-23T23:54:33.734130585Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\"" Jan 23 23:54:35.040571 containerd[2023]: time="2026-01-23T23:54:35.040486987Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:35.042219 containerd[2023]: time="2026-01-23T23:54:35.042143035Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477" Jan 23 23:54:35.044109 containerd[2023]: time="2026-01-23T23:54:35.043118215Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:35.050481 containerd[2023]: time="2026-01-23T23:54:35.050427847Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:35.053005 containerd[2023]: time="2026-01-23T23:54:35.052937803Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.318731774s" Jan 23 23:54:35.053005 containerd[2023]: time="2026-01-23T23:54:35.052998043Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\"" Jan 23 23:54:35.053705 
containerd[2023]: time="2026-01-23T23:54:35.053653039Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\"" Jan 23 23:54:36.096718 containerd[2023]: time="2026-01-23T23:54:36.096642800Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:36.099633 containerd[2023]: time="2026-01-23T23:54:36.099551504Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716" Jan 23 23:54:36.102161 containerd[2023]: time="2026-01-23T23:54:36.102067868Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:36.110039 containerd[2023]: time="2026-01-23T23:54:36.109987076Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:36.112436 containerd[2023]: time="2026-01-23T23:54:36.112215752Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.058504969s" Jan 23 23:54:36.112436 containerd[2023]: time="2026-01-23T23:54:36.112292804Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\"" Jan 23 23:54:36.114360 containerd[2023]: time="2026-01-23T23:54:36.114304208Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\"" Jan 23 23:54:37.332910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3335189914.mount: Deactivated successfully. 
Jan 23 23:54:37.764450 containerd[2023]: time="2026-01-23T23:54:37.764380693Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.766137 containerd[2023]: time="2026-01-23T23:54:37.765860665Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253" Jan 23 23:54:37.767976 containerd[2023]: time="2026-01-23T23:54:37.767916973Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.773207 containerd[2023]: time="2026-01-23T23:54:37.773141065Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:37.775279 containerd[2023]: time="2026-01-23T23:54:37.774845329Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.660475349s" Jan 23 23:54:37.775279 containerd[2023]: time="2026-01-23T23:54:37.774970285Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\"" Jan 23 23:54:37.776405 containerd[2023]: time="2026-01-23T23:54:37.775797025Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\"" Jan 23 23:54:38.334231 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount540102887.mount: Deactivated successfully. 
Jan 23 23:54:39.557741 containerd[2023]: time="2026-01-23T23:54:39.557659430Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:39.560004 containerd[2023]: time="2026-01-23T23:54:39.559950590Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406" Jan 23 23:54:39.561909 containerd[2023]: time="2026-01-23T23:54:39.560421518Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:39.567812 containerd[2023]: time="2026-01-23T23:54:39.567722006Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:39.575989 containerd[2023]: time="2026-01-23T23:54:39.575883854Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.800025773s" Jan 23 23:54:39.576161 containerd[2023]: time="2026-01-23T23:54:39.576130946Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\"" Jan 23 23:54:39.576905 containerd[2023]: time="2026-01-23T23:54:39.576839594Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\"" Jan 23 23:54:40.054621 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1102378324.mount: Deactivated successfully. 
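Each pull above logs both the transferred size and the wall time, so effective throughput can be read straight off the entries; the coredns pull, for instance, moved about 20.4 MB in 1.8 s:

    # Effective pull throughput from containerd's own numbers above
    # ("size \"20392204\" in 1.800025773s" for the coredns image).
    size_bytes = 20_392_204
    seconds = 1.800025773
    print(f"{size_bytes / seconds / 2**20:.1f} MiB/s")   # ~10.8 MiB/s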
Jan 23 23:54:40.061939 containerd[2023]: time="2026-01-23T23:54:40.061496508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:40.063918 containerd[2023]: time="2026-01-23T23:54:40.062918196Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709" Jan 23 23:54:40.064703 containerd[2023]: time="2026-01-23T23:54:40.064661436Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:40.068941 containerd[2023]: time="2026-01-23T23:54:40.068849688Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:40.070911 containerd[2023]: time="2026-01-23T23:54:40.070842612Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 493.94477ms" Jan 23 23:54:40.071088 containerd[2023]: time="2026-01-23T23:54:40.071054604Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\"" Jan 23 23:54:40.071967 containerd[2023]: time="2026-01-23T23:54:40.071922312Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\"" Jan 23 23:54:40.613879 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1826623342.mount: Deactivated successfully. Jan 23 23:54:42.711444 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 23 23:54:42.718241 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:43.766182 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:43.782462 (kubelet)[2703]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 23:54:43.926646 kubelet[2703]: E0123 23:54:43.926310 2703 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 23:54:43.932502 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 23:54:43.933835 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
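systemd paces the kubelet crash loop: the second failure lands at 23:54:32.549 and the restart job above is scheduled at 23:54:42.711, a gap of about 10.2 s. That is consistent with the RestartSec=10 used by kubeadm's stock kubelet drop-in, though that is an assumption, as the unit file itself never appears in this log:

    # Restart pacing read off the timestamps above.
    from datetime import datetime

    failed    = datetime.strptime("23:54:32.549492", "%H:%M:%S.%f")
    restarted = datetime.strptime("23:54:42.711444", "%H:%M:%S.%f")
    print((restarted - failed).total_seconds())   # 10.161952 s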
Jan 23 23:54:44.380050 containerd[2023]: time="2026-01-23T23:54:44.379965894Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:44.408699 containerd[2023]: time="2026-01-23T23:54:44.408630246Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987" Jan 23 23:54:44.418802 containerd[2023]: time="2026-01-23T23:54:44.418701930Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:44.426733 containerd[2023]: time="2026-01-23T23:54:44.426623442Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:54:44.430943 containerd[2023]: time="2026-01-23T23:54:44.430844118Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.358865598s" Jan 23 23:54:44.431086 containerd[2023]: time="2026-01-23T23:54:44.430945818Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\"" Jan 23 23:54:47.831820 systemd[1]: systemd-hostnamed.service: Deactivated successfully. Jan 23 23:54:53.202635 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:53.211434 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:53.270566 systemd[1]: Reloading requested from client PID 2743 ('systemctl') (unit session-7.scope)... Jan 23 23:54:53.270600 systemd[1]: Reloading... Jan 23 23:54:53.528009 zram_generator::config[2786]: No configuration found. Jan 23 23:54:53.769375 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:54:53.942648 systemd[1]: Reloading finished in 671 ms. Jan 23 23:54:54.046462 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:54.055806 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:54.061069 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:54:54.062982 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:54.068525 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:54:54.394932 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:54:54.412431 (kubelet)[2848]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:54:54.487170 kubelet[2848]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:54:54.487637 kubelet[2848]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. 
See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 23 23:54:54.488518 kubelet[2848]: I0123 23:54:54.488436 2848 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:54:55.676925 kubelet[2848]: I0123 23:54:55.676843 2848 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:54:55.676925 kubelet[2848]: I0123 23:54:55.676932 2848 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:54:55.679460 kubelet[2848]: I0123 23:54:55.679409 2848 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:54:55.679460 kubelet[2848]: I0123 23:54:55.679451 2848 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:54:55.679911 kubelet[2848]: I0123 23:54:55.679843 2848 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:54:55.691437 kubelet[2848]: E0123 23:54:55.691345 2848 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.95:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 23:54:55.692409 kubelet[2848]: I0123 23:54:55.691948 2848 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:54:55.697021 kubelet[2848]: E0123 23:54:55.696975 2848 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:54:55.697356 kubelet[2848]: I0123 23:54:55.697330 2848 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 23 23:54:55.703561 kubelet[2848]: I0123 23:54:55.703529 2848 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 23:54:55.704262 kubelet[2848]: I0123 23:54:55.704216 2848 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:54:55.704619 kubelet[2848]: I0123 23:54:55.704365 2848 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:54:55.705159 kubelet[2848]: I0123 23:54:55.704800 2848 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:54:55.705159 kubelet[2848]: I0123 23:54:55.704827 2848 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:54:55.705159 kubelet[2848]: I0123 23:54:55.705032 2848 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:54:55.710433 kubelet[2848]: I0123 23:54:55.710379 2848 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:54:55.713312 kubelet[2848]: I0123 23:54:55.713271 2848 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:54:55.715941 kubelet[2848]: I0123 23:54:55.714811 2848 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:54:55.715941 kubelet[2848]: I0123 23:54:55.714909 2848 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:54:55.715941 kubelet[2848]: I0123 23:54:55.714934 2848 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:54:55.717390 kubelet[2848]: E0123 23:54:55.717318 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:54:55.718051 kubelet[2848]: E0123 23:54:55.717980 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-95&limit=500&resourceVersion=0\": 
dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:54:55.718554 kubelet[2848]: I0123 23:54:55.718514 2848 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:54:55.719709 kubelet[2848]: I0123 23:54:55.719657 2848 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:54:55.719810 kubelet[2848]: I0123 23:54:55.719723 2848 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 23:54:55.719810 kubelet[2848]: W0123 23:54:55.719790 2848 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 23 23:54:55.724446 kubelet[2848]: I0123 23:54:55.724351 2848 server.go:1262] "Started kubelet" Jan 23 23:54:55.730651 kubelet[2848]: I0123 23:54:55.730599 2848 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:54:55.731177 kubelet[2848]: I0123 23:54:55.731094 2848 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:54:55.731281 kubelet[2848]: I0123 23:54:55.731193 2848 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:54:55.731709 kubelet[2848]: I0123 23:54:55.731663 2848 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:54:55.732727 kubelet[2848]: I0123 23:54:55.732688 2848 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:54:55.736123 kubelet[2848]: I0123 23:54:55.736085 2848 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:54:55.739945 kubelet[2848]: E0123 23:54:55.737434 2848 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.95:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.95:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-95.188d816423d42666 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-95,UID:ip-172-31-18-95,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-95,},FirstTimestamp:2026-01-23 23:54:55.724291686 +0000 UTC m=+1.305800456,LastTimestamp:2026-01-23 23:54:55.724291686 +0000 UTC m=+1.305800456,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-95,}" Jan 23 23:54:55.741831 kubelet[2848]: I0123 23:54:55.741767 2848 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:54:55.747697 kubelet[2848]: E0123 23:54:55.746689 2848 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-95\" not found" Jan 23 23:54:55.747697 kubelet[2848]: I0123 23:54:55.746749 2848 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:54:55.748152 kubelet[2848]: I0123 23:54:55.748016 2848 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:54:55.748152 kubelet[2848]: I0123 
23:54:55.748113 2848 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:54:55.749001 kubelet[2848]: E0123 23:54:55.748836 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:54:55.754370 kubelet[2848]: I0123 23:54:55.754310 2848 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:54:55.754370 kubelet[2848]: I0123 23:54:55.754356 2848 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:54:55.754586 kubelet[2848]: I0123 23:54:55.754509 2848 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:54:55.767443 kubelet[2848]: E0123 23:54:55.767395 2848 kubelet.go:1615] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:54:55.777138 kubelet[2848]: I0123 23:54:55.776979 2848 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:54:55.779506 kubelet[2848]: I0123 23:54:55.779434 2848 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 23:54:55.779506 kubelet[2848]: I0123 23:54:55.779481 2848 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:54:55.779697 kubelet[2848]: I0123 23:54:55.779528 2848 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:54:55.779697 kubelet[2848]: E0123 23:54:55.779608 2848 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:54:55.786220 kubelet[2848]: E0123 23:54:55.786151 2848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-95?timeout=10s\": dial tcp 172.31.18.95:6443: connect: connection refused" interval="200ms" Jan 23 23:54:55.789425 kubelet[2848]: E0123 23:54:55.788722 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:54:55.800874 kubelet[2848]: I0123 23:54:55.800829 2848 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:54:55.800874 kubelet[2848]: I0123 23:54:55.800862 2848 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:54:55.801119 kubelet[2848]: I0123 23:54:55.800943 2848 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:54:55.804474 kubelet[2848]: I0123 23:54:55.804433 2848 policy_none.go:49] "None policy: Start" Jan 23 23:54:55.804474 kubelet[2848]: I0123 23:54:55.804475 2848 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:54:55.804683 kubelet[2848]: I0123 23:54:55.804499 2848 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:54:55.806090 kubelet[2848]: I0123 23:54:55.806054 2848 policy_none.go:47] "Start" 
Jan 23 23:54:55.814559 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 23 23:54:55.837635 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 23 23:54:55.849961 kubelet[2848]: E0123 23:54:55.848582 2848 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-95\" not found" Jan 23 23:54:55.852395 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 23 23:54:55.855952 kubelet[2848]: E0123 23:54:55.855860 2848 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:54:55.857049 kubelet[2848]: I0123 23:54:55.857020 2848 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:54:55.857284 kubelet[2848]: I0123 23:54:55.857216 2848 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:54:55.859092 kubelet[2848]: I0123 23:54:55.859061 2848 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:54:55.859595 kubelet[2848]: E0123 23:54:55.859555 2848 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 23 23:54:55.859698 kubelet[2848]: E0123 23:54:55.859621 2848 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-95\" not found" Jan 23 23:54:55.899331 systemd[1]: Created slice kubepods-burstable-pod7e9e086aa157f75e5e23241a4bb090f6.slice - libcontainer container kubepods-burstable-pod7e9e086aa157f75e5e23241a4bb090f6.slice. Jan 23 23:54:55.917716 kubelet[2848]: E0123 23:54:55.917564 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:55.923817 systemd[1]: Created slice kubepods-burstable-pod16e644c00c0d287f45a24d50f220ec50.slice - libcontainer container kubepods-burstable-pod16e644c00c0d287f45a24d50f220ec50.slice. Jan 23 23:54:55.941345 kubelet[2848]: E0123 23:54:55.937427 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:55.948699 systemd[1]: Created slice kubepods-burstable-podb25c94555dceca972166e11e1bee55c8.slice - libcontainer container kubepods-burstable-podb25c94555dceca972166e11e1bee55c8.slice. 
Jan 23 23:54:55.958190 kubelet[2848]: E0123 23:54:55.958132 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:55.962702 kubelet[2848]: I0123 23:54:55.961974 2848 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:54:55.962702 kubelet[2848]: E0123 23:54:55.962630 2848 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.95:6443/api/v1/nodes\": dial tcp 172.31.18.95:6443: connect: connection refused" node="ip-172-31-18-95" Jan 23 23:54:55.987215 kubelet[2848]: E0123 23:54:55.987158 2848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-95?timeout=10s\": dial tcp 172.31.18.95:6443: connect: connection refused" interval="400ms" Jan 23 23:54:56.050208 kubelet[2848]: I0123 23:54:56.049773 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:54:56.050208 kubelet[2848]: I0123 23:54:56.049832 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16e644c00c0d287f45a24d50f220ec50-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-95\" (UID: \"16e644c00c0d287f45a24d50f220ec50\") " pod="kube-system/kube-scheduler-ip-172-31-18-95" Jan 23 23:54:56.050208 kubelet[2848]: I0123 23:54:56.049873 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:54:56.050208 kubelet[2848]: I0123 23:54:56.049948 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:54:56.050208 kubelet[2848]: I0123 23:54:56.049991 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:54:56.050564 kubelet[2848]: I0123 23:54:56.050027 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:54:56.050564 kubelet[2848]: I0123 23:54:56.050076 2848 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-ca-certs\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:54:56.050564 kubelet[2848]: I0123 23:54:56.050118 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:54:56.050564 kubelet[2848]: I0123 23:54:56.050151 2848 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:54:56.165412 kubelet[2848]: I0123 23:54:56.165073 2848 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:54:56.165920 kubelet[2848]: E0123 23:54:56.165832 2848 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.95:6443/api/v1/nodes\": dial tcp 172.31.18.95:6443: connect: connection refused" node="ip-172-31-18-95" Jan 23 23:54:56.222431 containerd[2023]: time="2026-01-23T23:54:56.222357064Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-95,Uid:7e9e086aa157f75e5e23241a4bb090f6,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:56.242356 containerd[2023]: time="2026-01-23T23:54:56.242297068Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-95,Uid:16e644c00c0d287f45a24d50f220ec50,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:56.262013 containerd[2023]: time="2026-01-23T23:54:56.261937073Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-95,Uid:b25c94555dceca972166e11e1bee55c8,Namespace:kube-system,Attempt:0,}" Jan 23 23:54:56.387820 kubelet[2848]: E0123 23:54:56.387731 2848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-95?timeout=10s\": dial tcp 172.31.18.95:6443: connect: connection refused" interval="800ms" Jan 23 23:54:56.569496 kubelet[2848]: I0123 23:54:56.569295 2848 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:54:56.569943 kubelet[2848]: E0123 23:54:56.569815 2848 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.95:6443/api/v1/nodes\": dial tcp 172.31.18.95:6443: connect: connection refused" node="ip-172-31-18-95" Jan 23 23:54:56.693700 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount259254484.mount: Deactivated successfully. 
Jan 23 23:54:56.705024 containerd[2023]: time="2026-01-23T23:54:56.704426683Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:56.711399 containerd[2023]: time="2026-01-23T23:54:56.711325135Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=269173" Jan 23 23:54:56.714075 containerd[2023]: time="2026-01-23T23:54:56.713243383Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:56.716270 containerd[2023]: time="2026-01-23T23:54:56.716074315Z" level=info msg="ImageCreate event name:\"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:56.718681 containerd[2023]: time="2026-01-23T23:54:56.718619407Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:56.720533 containerd[2023]: time="2026-01-23T23:54:56.720481867Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:54:56.721620 containerd[2023]: time="2026-01-23T23:54:56.721529299Z" level=info msg="stop pulling image registry.k8s.io/pause:3.8: active requests=0, bytes read=0" Jan 23 23:54:56.727077 containerd[2023]: time="2026-01-23T23:54:56.726961699Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 23:54:56.732014 containerd[2023]: time="2026-01-23T23:54:56.731291515Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 488.875263ms" Jan 23 23:54:56.735304 containerd[2023]: time="2026-01-23T23:54:56.735226171Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 512.728791ms" Jan 23 23:54:56.736912 containerd[2023]: time="2026-01-23T23:54:56.736830931Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.8\" with image id \"sha256:4e42fb3c9d90ed7895bc04a9d96fe3102a65b521f485cc5a4f3dd818afef9cef\", repo tag \"registry.k8s.io/pause:3.8\", repo digest \"registry.k8s.io/pause@sha256:9001185023633d17a2f98ff69b6ff2615b8ea02a825adffa40422f51dfdcde9d\", size \"268403\" in 474.772574ms" Jan 23 23:54:56.922001 containerd[2023]: time="2026-01-23T23:54:56.920873912Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:56.923017 containerd[2023]: time="2026-01-23T23:54:56.922658552Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:56.923200 containerd[2023]: time="2026-01-23T23:54:56.923122484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.925143 containerd[2023]: time="2026-01-23T23:54:56.924928916Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.930056 containerd[2023]: time="2026-01-23T23:54:56.928747424Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:56.930056 containerd[2023]: time="2026-01-23T23:54:56.928860380Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:56.930056 containerd[2023]: time="2026-01-23T23:54:56.928934048Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.930056 containerd[2023]: time="2026-01-23T23:54:56.929096552Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.932469 containerd[2023]: time="2026-01-23T23:54:56.931782728Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:54:56.932469 containerd[2023]: time="2026-01-23T23:54:56.931870532Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:54:56.932469 containerd[2023]: time="2026-01-23T23:54:56.931929740Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.932469 containerd[2023]: time="2026-01-23T23:54:56.932098640Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:54:56.950182 kubelet[2848]: E0123 23:54:56.949855 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.95:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 23:54:56.986605 systemd[1]: Started cri-containerd-42cc847ae7ed497b8d50a81e6428a83803bddbb24163d28d5bcb3ab03378c226.scope - libcontainer container 42cc847ae7ed497b8d50a81e6428a83803bddbb24163d28d5bcb3ab03378c226. Jan 23 23:54:57.002259 systemd[1]: Started cri-containerd-35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6.scope - libcontainer container 35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6. Jan 23 23:54:57.007516 systemd[1]: Started cri-containerd-453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca.scope - libcontainer container 453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca. 
Jan 23 23:54:57.054724 kubelet[2848]: E0123 23:54:57.054651 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.95:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 23:54:57.124881 containerd[2023]: time="2026-01-23T23:54:57.124815761Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-95,Uid:b25c94555dceca972166e11e1bee55c8,Namespace:kube-system,Attempt:0,} returns sandbox id \"42cc847ae7ed497b8d50a81e6428a83803bddbb24163d28d5bcb3ab03378c226\"" Jan 23 23:54:57.128686 containerd[2023]: time="2026-01-23T23:54:57.128539469Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-95,Uid:16e644c00c0d287f45a24d50f220ec50,Namespace:kube-system,Attempt:0,} returns sandbox id \"35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6\"" Jan 23 23:54:57.145284 containerd[2023]: time="2026-01-23T23:54:57.145130717Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-95,Uid:7e9e086aa157f75e5e23241a4bb090f6,Namespace:kube-system,Attempt:0,} returns sandbox id \"453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca\"" Jan 23 23:54:57.145629 containerd[2023]: time="2026-01-23T23:54:57.145351073Z" level=info msg="CreateContainer within sandbox \"42cc847ae7ed497b8d50a81e6428a83803bddbb24163d28d5bcb3ab03378c226\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 23:54:57.147428 containerd[2023]: time="2026-01-23T23:54:57.147372461Z" level=info msg="CreateContainer within sandbox \"35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 23:54:57.156829 containerd[2023]: time="2026-01-23T23:54:57.156644057Z" level=info msg="CreateContainer within sandbox \"453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 23:54:57.189343 kubelet[2848]: E0123 23:54:57.188834 2848 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-95?timeout=10s\": dial tcp 172.31.18.95:6443: connect: connection refused" interval="1.6s" Jan 23 23:54:57.205684 containerd[2023]: time="2026-01-23T23:54:57.205415021Z" level=info msg="CreateContainer within sandbox \"35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300\"" Jan 23 23:54:57.208117 containerd[2023]: time="2026-01-23T23:54:57.208039529Z" level=info msg="CreateContainer within sandbox \"42cc847ae7ed497b8d50a81e6428a83803bddbb24163d28d5bcb3ab03378c226\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a4942e32a32faab1051c4ee9f5b204b9cb7c58b6fa7f0cd01c15b45745456376\"" Jan 23 23:54:57.208447 containerd[2023]: time="2026-01-23T23:54:57.208390481Z" level=info msg="StartContainer for \"3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300\"" Jan 23 23:54:57.214220 containerd[2023]: time="2026-01-23T23:54:57.214157117Z" level=info msg="CreateContainer within sandbox 
\"453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56\"" Jan 23 23:54:57.216013 containerd[2023]: time="2026-01-23T23:54:57.215259245Z" level=info msg="StartContainer for \"533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56\"" Jan 23 23:54:57.226263 containerd[2023]: time="2026-01-23T23:54:57.226201493Z" level=info msg="StartContainer for \"a4942e32a32faab1051c4ee9f5b204b9cb7c58b6fa7f0cd01c15b45745456376\"" Jan 23 23:54:57.257861 kubelet[2848]: E0123 23:54:57.257799 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.95:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 23:54:57.269869 kubelet[2848]: E0123 23:54:57.269802 2848 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.95:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-95&limit=500&resourceVersion=0\": dial tcp 172.31.18.95:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 23:54:57.278224 systemd[1]: Started cri-containerd-3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300.scope - libcontainer container 3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300. Jan 23 23:54:57.302252 systemd[1]: Started cri-containerd-533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56.scope - libcontainer container 533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56. Jan 23 23:54:57.317240 systemd[1]: Started cri-containerd-a4942e32a32faab1051c4ee9f5b204b9cb7c58b6fa7f0cd01c15b45745456376.scope - libcontainer container a4942e32a32faab1051c4ee9f5b204b9cb7c58b6fa7f0cd01c15b45745456376. 
Jan 23 23:54:57.376335 kubelet[2848]: I0123 23:54:57.376257 2848 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:54:57.379747 kubelet[2848]: E0123 23:54:57.379357 2848 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.95:6443/api/v1/nodes\": dial tcp 172.31.18.95:6443: connect: connection refused" node="ip-172-31-18-95" Jan 23 23:54:57.399416 containerd[2023]: time="2026-01-23T23:54:57.399345258Z" level=info msg="StartContainer for \"3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300\" returns successfully" Jan 23 23:54:57.448079 containerd[2023]: time="2026-01-23T23:54:57.447766734Z" level=info msg="StartContainer for \"533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56\" returns successfully" Jan 23 23:54:57.465119 containerd[2023]: time="2026-01-23T23:54:57.464867682Z" level=info msg="StartContainer for \"a4942e32a32faab1051c4ee9f5b204b9cb7c58b6fa7f0cd01c15b45745456376\" returns successfully" Jan 23 23:54:57.817030 kubelet[2848]: E0123 23:54:57.816959 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:57.818124 kubelet[2848]: E0123 23:54:57.818083 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:57.823231 kubelet[2848]: E0123 23:54:57.823196 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:58.829412 kubelet[2848]: E0123 23:54:58.829351 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:58.830204 kubelet[2848]: E0123 23:54:58.829959 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:54:58.983349 kubelet[2848]: I0123 23:54:58.983291 2848 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:54:59.505036 kubelet[2848]: E0123 23:54:59.503526 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:55:01.167993 kubelet[2848]: E0123 23:55:01.167933 2848 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-95\" not found" node="ip-172-31-18-95" Jan 23 23:55:01.601249 kubelet[2848]: I0123 23:55:01.601189 2848 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-95" Jan 23 23:55:01.601249 kubelet[2848]: E0123 23:55:01.601249 2848 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-95\": node \"ip-172-31-18-95\" not found" Jan 23 23:55:01.656198 kubelet[2848]: I0123 23:55:01.654346 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:01.704621 kubelet[2848]: E0123 23:55:01.704553 2848 controller.go:145] "Failed to ensure lease exists, will retry" err="namespaces \"kube-node-lease\" not found" interval="3.2s" Jan 23 23:55:01.717653 kubelet[2848]: E0123 23:55:01.717536 
2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:01.717653 kubelet[2848]: I0123 23:55:01.717655 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-95" Jan 23 23:55:01.722031 kubelet[2848]: I0123 23:55:01.721689 2848 apiserver.go:52] "Watching apiserver" Jan 23 23:55:01.728466 kubelet[2848]: E0123 23:55:01.728101 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-95" Jan 23 23:55:01.728466 kubelet[2848]: I0123 23:55:01.728147 2848 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:01.742918 kubelet[2848]: E0123 23:55:01.742843 2848 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-95\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:01.749932 kubelet[2848]: I0123 23:55:01.749065 2848 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:55:02.225207 update_engine[1998]: I20260123 23:55:02.225108 1998 update_attempter.cc:509] Updating boot flags... Jan 23 23:55:02.382999 kernel: BTRFS warning: duplicate device /dev/nvme0n1p3 devid 1 generation 34 scanned by (udev-worker) (3147) Jan 23 23:55:05.040865 systemd[1]: Reloading requested from client PID 3233 ('systemctl') (unit session-7.scope)... Jan 23 23:55:05.041355 systemd[1]: Reloading... Jan 23 23:55:05.224934 zram_generator::config[3276]: No configuration found. Jan 23 23:55:05.456510 systemd[1]: /usr/lib/systemd/system/docker.socket:6: ListenStream= references a path below legacy directory /var/run/, updating /var/run/docker.sock → /run/docker.sock; please update the unit file accordingly. Jan 23 23:55:05.662113 systemd[1]: Reloading finished in 620 ms. Jan 23 23:55:05.736132 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:05.756598 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 23:55:05.757999 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:05.758091 systemd[1]: kubelet.service: Consumed 2.134s CPU time, 121.1M memory peak, 0B memory swap peak. Jan 23 23:55:05.765409 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 23:55:06.231220 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 23:55:06.247353 (kubelet)[3333]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 23:55:06.376037 kubelet[3333]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 23:55:06.376037 kubelet[3333]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 23:55:06.376037 kubelet[3333]: I0123 23:55:06.375311 3333 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 23:55:06.389413 kubelet[3333]: I0123 23:55:06.388463 3333 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 23:55:06.389413 kubelet[3333]: I0123 23:55:06.388513 3333 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 23:55:06.389413 kubelet[3333]: I0123 23:55:06.388587 3333 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 23:55:06.389413 kubelet[3333]: I0123 23:55:06.388601 3333 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 23:55:06.389413 kubelet[3333]: I0123 23:55:06.389086 3333 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 23:55:06.391864 kubelet[3333]: I0123 23:55:06.391718 3333 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 23:55:06.405754 kubelet[3333]: I0123 23:55:06.403373 3333 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 23:55:06.427459 kubelet[3333]: E0123 23:55:06.426606 3333 log.go:32] "RuntimeConfig from runtime service failed" err="rpc error: code = Unimplemented desc = unknown method RuntimeConfig for service runtime.v1.RuntimeService" Jan 23 23:55:06.427927 kubelet[3333]: I0123 23:55:06.427764 3333 server.go:1400] "CRI implementation should be updated to support RuntimeConfig. Falling back to using cgroupDriver from kubelet config." Jan 23 23:55:06.435725 kubelet[3333]: I0123 23:55:06.435654 3333 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 23:55:06.436765 kubelet[3333]: I0123 23:55:06.436145 3333 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 23:55:06.436765 kubelet[3333]: I0123 23:55:06.436234 3333 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-95","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 23:55:06.436765 kubelet[3333]: I0123 23:55:06.436514 3333 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 23:55:06.436765 kubelet[3333]: I0123 23:55:06.436533 3333 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 23:55:06.437143 kubelet[3333]: I0123 23:55:06.436579 3333 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 23:55:06.438268 kubelet[3333]: I0123 23:55:06.438219 3333 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:06.438787 kubelet[3333]: I0123 23:55:06.438556 3333 kubelet.go:475] "Attempting to sync node with API server" Jan 23 23:55:06.438787 kubelet[3333]: I0123 23:55:06.438605 3333 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 23:55:06.438787 kubelet[3333]: I0123 23:55:06.438648 3333 kubelet.go:387] "Adding apiserver pod source" Jan 23 23:55:06.438787 kubelet[3333]: I0123 23:55:06.438679 3333 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 23:55:06.451976 kubelet[3333]: I0123 23:55:06.447298 3333 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v1.7.21" apiVersion="v1" Jan 23 23:55:06.451976 kubelet[3333]: I0123 23:55:06.448327 3333 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 23:55:06.451976 kubelet[3333]: I0123 23:55:06.448377 3333 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 23:55:06.454968 
kubelet[3333]: I0123 23:55:06.454368 3333 server.go:1262] "Started kubelet" Jan 23 23:55:06.461958 kubelet[3333]: I0123 23:55:06.460990 3333 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 23:55:06.474580 kubelet[3333]: I0123 23:55:06.474509 3333 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 23:55:06.476147 kubelet[3333]: I0123 23:55:06.476038 3333 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 23:55:06.476260 kubelet[3333]: I0123 23:55:06.476168 3333 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 23:55:06.477540 kubelet[3333]: I0123 23:55:06.477495 3333 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 23:55:06.496969 kubelet[3333]: I0123 23:55:06.492080 3333 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 23:55:06.496969 kubelet[3333]: I0123 23:55:06.495297 3333 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 23:55:06.496969 kubelet[3333]: E0123 23:55:06.495621 3333 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-95\" not found" Jan 23 23:55:06.496969 kubelet[3333]: I0123 23:55:06.496552 3333 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 23:55:06.496969 kubelet[3333]: I0123 23:55:06.496805 3333 reconciler.go:29] "Reconciler: start to sync state" Jan 23 23:55:06.499813 kubelet[3333]: I0123 23:55:06.498832 3333 server.go:310] "Adding debug handlers to kubelet server" Jan 23 23:55:06.517855 kubelet[3333]: I0123 23:55:06.517123 3333 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 23:55:06.521067 kubelet[3333]: I0123 23:55:06.520998 3333 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 23:55:06.521067 kubelet[3333]: I0123 23:55:06.521043 3333 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 23:55:06.521276 kubelet[3333]: I0123 23:55:06.521080 3333 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 23:55:06.521276 kubelet[3333]: E0123 23:55:06.521152 3333 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 23:55:06.531049 kubelet[3333]: I0123 23:55:06.530997 3333 factory.go:223] Registration of the systemd container factory successfully Jan 23 23:55:06.531207 kubelet[3333]: I0123 23:55:06.531176 3333 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 23:55:06.559706 kubelet[3333]: E0123 23:55:06.558507 3333 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 23:55:06.559706 kubelet[3333]: I0123 23:55:06.559195 3333 factory.go:223] Registration of the containerd container factory successfully Jan 23 23:55:06.621388 kubelet[3333]: E0123 23:55:06.621334 3333 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 23:55:06.661490 kubelet[3333]: I0123 23:55:06.661406 3333 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 23:55:06.661490 kubelet[3333]: I0123 23:55:06.661439 3333 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 23:55:06.661490 kubelet[3333]: I0123 23:55:06.661476 3333 state_mem.go:36] "Initialized new in-memory state store" Jan 23 23:55:06.661719 kubelet[3333]: I0123 23:55:06.661691 3333 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 23:55:06.661789 kubelet[3333]: I0123 23:55:06.661710 3333 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 23:55:06.661789 kubelet[3333]: I0123 23:55:06.661742 3333 policy_none.go:49] "None policy: Start" Jan 23 23:55:06.661789 kubelet[3333]: I0123 23:55:06.661759 3333 memory_manager.go:187] "Starting memorymanager" policy="None" Jan 23 23:55:06.661789 kubelet[3333]: I0123 23:55:06.661779 3333 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 23:55:06.662025 kubelet[3333]: I0123 23:55:06.661991 3333 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 23:55:06.662025 kubelet[3333]: I0123 23:55:06.662009 3333 policy_none.go:47] "Start" Jan 23 23:55:06.676064 kubelet[3333]: E0123 23:55:06.676019 3333 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 23:55:06.676948 kubelet[3333]: I0123 23:55:06.676352 3333 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 23:55:06.676948 kubelet[3333]: I0123 23:55:06.676384 3333 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 23:55:06.681247 kubelet[3333]: I0123 23:55:06.680971 3333 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 23:55:06.687935 kubelet[3333]: E0123 23:55:06.686408 3333 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 23:55:06.787954 kubelet[3333]: I0123 23:55:06.787656 3333 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-95" Jan 23 23:55:06.805820 kubelet[3333]: I0123 23:55:06.805044 3333 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-95" Jan 23 23:55:06.805820 kubelet[3333]: I0123 23:55:06.805167 3333 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-95" Jan 23 23:55:06.827474 kubelet[3333]: I0123 23:55:06.826951 3333 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:06.827474 kubelet[3333]: I0123 23:55:06.827030 3333 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:06.828147 kubelet[3333]: I0123 23:55:06.827738 3333 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-95" Jan 23 23:55:06.997870 kubelet[3333]: I0123 23:55:06.997788 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/16e644c00c0d287f45a24d50f220ec50-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-95\" (UID: \"16e644c00c0d287f45a24d50f220ec50\") " pod="kube-system/kube-scheduler-ip-172-31-18-95" Jan 23 23:55:06.997870 kubelet[3333]: I0123 23:55:06.997869 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-ca-certs\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:06.998170 kubelet[3333]: I0123 23:55:06.997938 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:06.998170 kubelet[3333]: I0123 23:55:06.997979 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/b25c94555dceca972166e11e1bee55c8-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-95\" (UID: \"b25c94555dceca972166e11e1bee55c8\") " pod="kube-system/kube-apiserver-ip-172-31-18-95" Jan 23 23:55:06.998170 kubelet[3333]: I0123 23:55:06.998022 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:06.998170 kubelet[3333]: I0123 23:55:06.998060 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:06.998170 kubelet[3333]: I0123 23:55:06.998093 3333 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:06.998425 kubelet[3333]: I0123 23:55:06.998128 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:06.998425 kubelet[3333]: I0123 23:55:06.998167 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7e9e086aa157f75e5e23241a4bb090f6-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-95\" (UID: \"7e9e086aa157f75e5e23241a4bb090f6\") " pod="kube-system/kube-controller-manager-ip-172-31-18-95" Jan 23 23:55:07.444711 kubelet[3333]: I0123 23:55:07.444649 3333 apiserver.go:52] "Watching apiserver" Jan 23 23:55:07.497343 kubelet[3333]: I0123 23:55:07.497279 3333 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 23:55:07.568855 kubelet[3333]: I0123 23:55:07.568745 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-95" podStartSLOduration=1.568720889 podStartE2EDuration="1.568720889s" podCreationTimestamp="2026-01-23 23:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:07.550088537 +0000 UTC m=+1.290732944" watchObservedRunningTime="2026-01-23 23:55:07.568720889 +0000 UTC m=+1.309365260" Jan 23 23:55:07.595470 kubelet[3333]: I0123 23:55:07.595219 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-95" podStartSLOduration=1.595197149 podStartE2EDuration="1.595197149s" podCreationTimestamp="2026-01-23 23:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:07.569324189 +0000 UTC m=+1.309968584" watchObservedRunningTime="2026-01-23 23:55:07.595197149 +0000 UTC m=+1.335841532" Jan 23 23:55:07.617953 kubelet[3333]: I0123 23:55:07.617327 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-95" podStartSLOduration=1.617307173 podStartE2EDuration="1.617307173s" podCreationTimestamp="2026-01-23 23:55:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:07.597564389 +0000 UTC m=+1.338208772" watchObservedRunningTime="2026-01-23 23:55:07.617307173 +0000 UTC m=+1.357951544" Jan 23 23:55:12.266558 kubelet[3333]: I0123 23:55:12.266146 3333 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 23:55:12.267944 containerd[2023]: time="2026-01-23T23:55:12.267488180Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 23:55:12.268465 kubelet[3333]: I0123 23:55:12.267853 3333 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 23:55:13.383760 kubelet[3333]: E0123 23:55:13.383607 3333 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-proxy\" is forbidden: User \"system:node:ip-172-31-18-95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-95' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-proxy\"" type="*v1.ConfigMap" Jan 23 23:55:13.383760 kubelet[3333]: E0123 23:55:13.383743 3333 reflector.go:205] "Failed to watch" err="failed to list *v1.ConfigMap: configmaps \"kube-root-ca.crt\" is forbidden: User \"system:node:ip-172-31-18-95\" cannot list resource \"configmaps\" in API group \"\" in the namespace \"kube-system\": no relationship found between node 'ip-172-31-18-95' and this object" logger="UnhandledError" reflector="object-\"kube-system\"/\"kube-root-ca.crt\"" type="*v1.ConfigMap" Jan 23 23:55:13.392574 systemd[1]: Created slice kubepods-besteffort-pod956f6323_3569_40d7_9975_43d6cdac49cf.slice - libcontainer container kubepods-besteffort-pod956f6323_3569_40d7_9975_43d6cdac49cf.slice. Jan 23 23:55:13.441394 kubelet[3333]: I0123 23:55:13.441321 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/956f6323-3569-40d7-9975-43d6cdac49cf-xtables-lock\") pod \"kube-proxy-82pkw\" (UID: \"956f6323-3569-40d7-9975-43d6cdac49cf\") " pod="kube-system/kube-proxy-82pkw" Jan 23 23:55:13.441394 kubelet[3333]: I0123 23:55:13.441394 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/956f6323-3569-40d7-9975-43d6cdac49cf-lib-modules\") pod \"kube-proxy-82pkw\" (UID: \"956f6323-3569-40d7-9975-43d6cdac49cf\") " pod="kube-system/kube-proxy-82pkw" Jan 23 23:55:13.441620 kubelet[3333]: I0123 23:55:13.441438 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/956f6323-3569-40d7-9975-43d6cdac49cf-kube-proxy\") pod \"kube-proxy-82pkw\" (UID: \"956f6323-3569-40d7-9975-43d6cdac49cf\") " pod="kube-system/kube-proxy-82pkw" Jan 23 23:55:13.441620 kubelet[3333]: I0123 23:55:13.441473 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kdlcq\" (UniqueName: \"kubernetes.io/projected/956f6323-3569-40d7-9975-43d6cdac49cf-kube-api-access-kdlcq\") pod \"kube-proxy-82pkw\" (UID: \"956f6323-3569-40d7-9975-43d6cdac49cf\") " pod="kube-system/kube-proxy-82pkw" Jan 23 23:55:13.543509 kubelet[3333]: I0123 23:55:13.543248 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-n58rp\" (UID: \"7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n58rp" Jan 23 23:55:13.543509 kubelet[3333]: I0123 23:55:13.543365 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqrhr\" (UniqueName: \"kubernetes.io/projected/7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc-kube-api-access-tqrhr\") pod 
\"tigera-operator-65cdcdfd6d-n58rp\" (UID: \"7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-n58rp" Jan 23 23:55:13.547843 systemd[1]: Created slice kubepods-besteffort-pod7b0d0a3f_1ea4_4af1_bc3f_d81d4cdaf1fc.slice - libcontainer container kubepods-besteffort-pod7b0d0a3f_1ea4_4af1_bc3f_d81d4cdaf1fc.slice. Jan 23 23:55:13.861395 containerd[2023]: time="2026-01-23T23:55:13.861274404Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n58rp,Uid:7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc,Namespace:tigera-operator,Attempt:0,}" Jan 23 23:55:13.902942 containerd[2023]: time="2026-01-23T23:55:13.902560056Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:13.902942 containerd[2023]: time="2026-01-23T23:55:13.902677332Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:13.902942 containerd[2023]: time="2026-01-23T23:55:13.902732484Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:13.904283 containerd[2023]: time="2026-01-23T23:55:13.904057248Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:13.942226 systemd[1]: Started cri-containerd-ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc.scope - libcontainer container ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc. Jan 23 23:55:14.011415 containerd[2023]: time="2026-01-23T23:55:14.011341965Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-n58rp,Uid:7b0d0a3f-1ea4-4af1-bc3f-d81d4cdaf1fc,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc\"" Jan 23 23:55:14.015813 containerd[2023]: time="2026-01-23T23:55:14.015746349Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 23:55:14.543628 kubelet[3333]: E0123 23:55:14.543571 3333 configmap.go:193] Couldn't get configMap kube-system/kube-proxy: failed to sync configmap cache: timed out waiting for the condition Jan 23 23:55:14.544392 kubelet[3333]: E0123 23:55:14.543694 3333 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/956f6323-3569-40d7-9975-43d6cdac49cf-kube-proxy podName:956f6323-3569-40d7-9975-43d6cdac49cf nodeName:}" failed. No retries permitted until 2026-01-23 23:55:15.043660371 +0000 UTC m=+8.784304742 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-proxy" (UniqueName: "kubernetes.io/configmap/956f6323-3569-40d7-9975-43d6cdac49cf-kube-proxy") pod "kube-proxy-82pkw" (UID: "956f6323-3569-40d7-9975-43d6cdac49cf") : failed to sync configmap cache: timed out waiting for the condition Jan 23 23:55:15.222108 containerd[2023]: time="2026-01-23T23:55:15.219241775Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82pkw,Uid:956f6323-3569-40d7-9975-43d6cdac49cf,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:15.297794 containerd[2023]: time="2026-01-23T23:55:15.297019115Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:15.297794 containerd[2023]: time="2026-01-23T23:55:15.297117239Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:15.297794 containerd[2023]: time="2026-01-23T23:55:15.297146111Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:15.297794 containerd[2023]: time="2026-01-23T23:55:15.297313283Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:15.341760 systemd[1]: run-containerd-runc-k8s.io-38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d-runc.B9mKtH.mount: Deactivated successfully. Jan 23 23:55:15.355268 systemd[1]: Started cri-containerd-38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d.scope - libcontainer container 38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d. Jan 23 23:55:15.424095 containerd[2023]: time="2026-01-23T23:55:15.424043772Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-82pkw,Uid:956f6323-3569-40d7-9975-43d6cdac49cf,Namespace:kube-system,Attempt:0,} returns sandbox id \"38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d\"" Jan 23 23:55:15.435617 containerd[2023]: time="2026-01-23T23:55:15.435525024Z" level=info msg="CreateContainer within sandbox \"38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 23:55:15.462168 containerd[2023]: time="2026-01-23T23:55:15.462099360Z" level=info msg="CreateContainer within sandbox \"38154bbeaa612809713e471e307604a49a1cdc21210d3f8c0d70e53499f4721d\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"661c4c41df7d03f6e3f336207272a2da977668964cf8e705e5372f2923b62056\"" Jan 23 23:55:15.464547 containerd[2023]: time="2026-01-23T23:55:15.464298528Z" level=info msg="StartContainer for \"661c4c41df7d03f6e3f336207272a2da977668964cf8e705e5372f2923b62056\"" Jan 23 23:55:15.553825 systemd[1]: Started cri-containerd-661c4c41df7d03f6e3f336207272a2da977668964cf8e705e5372f2923b62056.scope - libcontainer container 661c4c41df7d03f6e3f336207272a2da977668964cf8e705e5372f2923b62056. Jan 23 23:55:15.649121 containerd[2023]: time="2026-01-23T23:55:15.649060045Z" level=info msg="StartContainer for \"661c4c41df7d03f6e3f336207272a2da977668964cf8e705e5372f2923b62056\" returns successfully" Jan 23 23:55:16.254592 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2532274850.mount: Deactivated successfully. 
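
The MountVolume.SetUp failure above is not retried immediately: the kubelet's pending-operations tracker defers the retry by durationBeforeRetry, 500ms on the first failure, and backs off on repeated failures. A generic sketch of that shape; the 500ms starting value matches the log, while the doubling factor and the ceiling are illustrative assumptions rather than the kubelet's exact constants:

package main

import (
	"fmt"
	"time"
)

// Exponential backoff in the spirit of durationBeforeRetry above.
func nextDelay(d time.Duration) time.Duration {
	const maxDelay = 2 * time.Minute // assumed ceiling, not from the log
	if d == 0 {
		return 500 * time.Millisecond // first retry delay seen in the log
	}
	d *= 2 // assumed growth factor
	if d > maxDelay {
		return maxDelay
	}
	return d
}

func main() {
	var d time.Duration
	for i := 0; i < 5; i++ {
		d = nextDelay(d)
		fmt.Println(d) // 500ms, 1s, 2s, 4s, 8s
	}
}
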
Jan 23 23:55:16.549166 containerd[2023]: time="2026-01-23T23:55:16.547602817Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.551308 containerd[2023]: time="2026-01-23T23:55:16.551140117Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 23:55:16.552368 containerd[2023]: time="2026-01-23T23:55:16.552166729Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.562670 containerd[2023]: time="2026-01-23T23:55:16.561931417Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:16.568984 containerd[2023]: time="2026-01-23T23:55:16.566210341Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.55038298s" Jan 23 23:55:16.569144 containerd[2023]: time="2026-01-23T23:55:16.568979341Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 23:55:16.583849 containerd[2023]: time="2026-01-23T23:55:16.583634929Z" level=info msg="CreateContainer within sandbox \"ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 23:55:16.604438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2430497327.mount: Deactivated successfully. Jan 23 23:55:16.614212 containerd[2023]: time="2026-01-23T23:55:16.614153138Z" level=info msg="CreateContainer within sandbox \"ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a\"" Jan 23 23:55:16.616935 containerd[2023]: time="2026-01-23T23:55:16.615590474Z" level=info msg="StartContainer for \"d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a\"" Jan 23 23:55:16.737552 systemd[1]: Started cri-containerd-d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a.scope - libcontainer container d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a. 
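
The pull records above carry enough to back out an effective transfer rate: 22152004 bytes read over the reported 2.55038298s comes to roughly 8.7 MB/s. A quick check of that arithmetic:

package main

import "fmt"

func main() {
	const bytesRead = 22152004.0 // "active requests=0, bytes read=22152004"
	const pullSecs = 2.55038298  // "... in 2.55038298s"
	fmt.Printf("%.2f MB/s\n", bytesRead/pullSecs/1e6) // ≈ 8.69 MB/s
}
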
Jan 23 23:55:16.804859 containerd[2023]: time="2026-01-23T23:55:16.804736275Z" level=info msg="StartContainer for \"d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a\" returns successfully" Jan 23 23:55:17.693105 kubelet[3333]: I0123 23:55:17.692804 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-82pkw" podStartSLOduration=4.692782323 podStartE2EDuration="4.692782323s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:55:16.720996974 +0000 UTC m=+10.461641357" watchObservedRunningTime="2026-01-23 23:55:17.692782323 +0000 UTC m=+11.433426694" Jan 23 23:55:23.520534 sudo[2340]: pam_unix(sudo:session): session closed for user root Jan 23 23:55:23.602464 sshd[2337]: pam_unix(sshd:session): session closed for user core Jan 23 23:55:23.614734 systemd[1]: sshd@6-172.31.18.95:22-4.153.228.146:44344.service: Deactivated successfully. Jan 23 23:55:23.621753 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 23:55:23.622249 systemd[1]: session-7.scope: Consumed 12.366s CPU time, 154.5M memory peak, 0B memory swap peak. Jan 23 23:55:23.624392 systemd-logind[1997]: Session 7 logged out. Waiting for processes to exit. Jan 23 23:55:23.626650 systemd-logind[1997]: Removed session 7. Jan 23 23:55:40.076711 kubelet[3333]: I0123 23:55:40.076565 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-n58rp" podStartSLOduration=24.518947474 podStartE2EDuration="27.076544902s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="2026-01-23 23:55:14.015195045 +0000 UTC m=+7.755839428" lastFinishedPulling="2026-01-23 23:55:16.572792497 +0000 UTC m=+10.313436856" observedRunningTime="2026-01-23 23:55:17.694947603 +0000 UTC m=+11.435591998" watchObservedRunningTime="2026-01-23 23:55:40.076544902 +0000 UTC m=+33.817189273" Jan 23 23:55:40.097304 systemd[1]: Created slice kubepods-besteffort-podd343cc17_1e84_450c_8a50_dfaadb68fa18.slice - libcontainer container kubepods-besteffort-podd343cc17_1e84_450c_8a50_dfaadb68fa18.slice. 
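
The slice names systemd reports here are derived mechanically from each pod's QoS class and UID: with the systemd cgroup driver, a pod's cgroup lives in kubepods-<qos>-pod<uid>.slice, with the dashes of the UID escaped to underscores because systemd reserves "-" as its hierarchy separator. A sketch of the mapping (the helper name is mine):

package main

import (
	"fmt"
	"strings"
)

// Pod UID -> systemd slice name under the kubepods hierarchy.
func podSlice(qos, uid string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, strings.ReplaceAll(uid, "-", "_"))
}

func main() {
	fmt.Println(podSlice("besteffort", "d343cc17-1e84-450c-8a50-dfaadb68fa18"))
	// kubepods-besteffort-podd343cc17_1e84_450c_8a50_dfaadb68fa18.slice
}
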
Jan 23 23:55:40.146329 kubelet[3333]: I0123 23:55:40.146274 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d343cc17-1e84-450c-8a50-dfaadb68fa18-tigera-ca-bundle\") pod \"calico-typha-5588b6d57d-pts8f\" (UID: \"d343cc17-1e84-450c-8a50-dfaadb68fa18\") " pod="calico-system/calico-typha-5588b6d57d-pts8f" Jan 23 23:55:40.146614 kubelet[3333]: I0123 23:55:40.146579 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/d343cc17-1e84-450c-8a50-dfaadb68fa18-typha-certs\") pod \"calico-typha-5588b6d57d-pts8f\" (UID: \"d343cc17-1e84-450c-8a50-dfaadb68fa18\") " pod="calico-system/calico-typha-5588b6d57d-pts8f" Jan 23 23:55:40.146738 kubelet[3333]: I0123 23:55:40.146637 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t72c4\" (UniqueName: \"kubernetes.io/projected/d343cc17-1e84-450c-8a50-dfaadb68fa18-kube-api-access-t72c4\") pod \"calico-typha-5588b6d57d-pts8f\" (UID: \"d343cc17-1e84-450c-8a50-dfaadb68fa18\") " pod="calico-system/calico-typha-5588b6d57d-pts8f" Jan 23 23:55:40.359980 systemd[1]: Created slice kubepods-besteffort-podc5ca2e24_260a_4a15_b489_065a69d6099b.slice - libcontainer container kubepods-besteffort-podc5ca2e24_260a_4a15_b489_065a69d6099b.slice. Jan 23 23:55:40.409134 containerd[2023]: time="2026-01-23T23:55:40.409063800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5588b6d57d-pts8f,Uid:d343cc17-1e84-450c-8a50-dfaadb68fa18,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:40.449629 kubelet[3333]: I0123 23:55:40.448643 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-cni-net-dir\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.449629 kubelet[3333]: I0123 23:55:40.448725 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-var-lib-calico\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.449629 kubelet[3333]: I0123 23:55:40.448767 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-cni-bin-dir\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.449629 kubelet[3333]: I0123 23:55:40.448803 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-policysync\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.449629 kubelet[3333]: I0123 23:55:40.448866 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/c5ca2e24-260a-4a15-b489-065a69d6099b-tigera-ca-bundle\") pod \"calico-node-hv49q\" (UID: 
\"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.450505 kubelet[3333]: I0123 23:55:40.448979 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dr9g\" (UniqueName: \"kubernetes.io/projected/c5ca2e24-260a-4a15-b489-065a69d6099b-kube-api-access-7dr9g\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.450505 kubelet[3333]: I0123 23:55:40.449156 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-cni-log-dir\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.450505 kubelet[3333]: I0123 23:55:40.449294 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/c5ca2e24-260a-4a15-b489-065a69d6099b-node-certs\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.450505 kubelet[3333]: I0123 23:55:40.449562 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-var-run-calico\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.450505 kubelet[3333]: I0123 23:55:40.449625 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-xtables-lock\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.453153 kubelet[3333]: I0123 23:55:40.449668 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-flexvol-driver-host\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.453153 kubelet[3333]: I0123 23:55:40.449703 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/c5ca2e24-260a-4a15-b489-065a69d6099b-lib-modules\") pod \"calico-node-hv49q\" (UID: \"c5ca2e24-260a-4a15-b489-065a69d6099b\") " pod="calico-system/calico-node-hv49q" Jan 23 23:55:40.483943 containerd[2023]: time="2026-01-23T23:55:40.483733140Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:40.485010 containerd[2023]: time="2026-01-23T23:55:40.483859008Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:40.487200 containerd[2023]: time="2026-01-23T23:55:40.486852036Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:40.488926 containerd[2023]: time="2026-01-23T23:55:40.488775000Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:40.509669 kubelet[3333]: E0123 23:55:40.508777 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:40.558996 kubelet[3333]: E0123 23:55:40.558957 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.560283 systemd[1]: Started cri-containerd-0b1fdee34c2cf8a4695b5788aac215cddbef7c5750e246ba97e298c14977bae3.scope - libcontainer container 0b1fdee34c2cf8a4695b5788aac215cddbef7c5750e246ba97e298c14977bae3. Jan 23 23:55:40.561402 kubelet[3333]: W0123 23:55:40.561225 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.565728 kubelet[3333]: E0123 23:55:40.565365 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
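
The three-message pattern above is the kubelet probing its FlexVolume plugin directory: it finds a driver directory named nodeagent~uds, execs the expected binary uds with the argument init, gets empty output because the executable is missing, and then fails to parse that emptiness as the JSON status object a driver must print (hence "unexpected end of JSON input"). A hypothetical stub of the handshake a conforming driver would implement; treat the capability details as illustrative:

package main

import (
	"encoding/json"
	"fmt"
	"os"
)

// A FlexVolume driver is any executable that answers subcommands such as
// "init" with a JSON status object on stdout. The empty output in the log
// is what produced "unexpected end of JSON input".
type driverStatus struct {
	Status       string          `json:"status"`
	Capabilities map[string]bool `json:"capabilities,omitempty"`
}

func main() {
	if len(os.Args) > 1 && os.Args[1] == "init" {
		out, _ := json.Marshal(driverStatus{
			Status:       "Success",
			Capabilities: map[string]bool{"attach": false}, // node-only driver
		})
		fmt.Println(string(out))
		return
	}
	out, _ := json.Marshal(driverStatus{Status: "Not supported"})
	fmt.Println(string(out))
	os.Exit(1)
}
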
Jan 23 23:55:40.654331 kubelet[3333]: I0123 23:55:40.653644 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0-varrun\") pod \"csi-node-driver-bsxtr\" (UID: \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\") " pod="calico-system/csi-node-driver-bsxtr" Jan 23 23:55:40.658681 kubelet[3333]: I0123 23:55:40.658478 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0-kubelet-dir\") pod \"csi-node-driver-bsxtr\" (UID: \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\") " pod="calico-system/csi-node-driver-bsxtr"
Jan 23 23:55:40.660408 kubelet[3333]: I0123 23:55:40.660218 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0-registration-dir\") pod \"csi-node-driver-bsxtr\" (UID: \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\") " pod="calico-system/csi-node-driver-bsxtr" Jan 23 23:55:40.664147 kubelet[3333]: I0123 23:55:40.663682 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bxb7h\" (UniqueName: \"kubernetes.io/projected/16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0-kube-api-access-bxb7h\") pod \"csi-node-driver-bsxtr\" (UID: \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\") " pod="calico-system/csi-node-driver-bsxtr"
Jan 23 23:55:40.665531 kubelet[3333]: I0123 23:55:40.665135 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0-socket-dir\") pod \"csi-node-driver-bsxtr\" (UID: \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\") " pod="calico-system/csi-node-driver-bsxtr"
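
The four host paths verified for csi-node-driver-bsxtr (varrun, kubelet-dir, registration-dir, socket-dir) are the usual plumbing for a CSI node plugin: the driver serves gRPC on a unix socket inside its socket directory, and a companion registrar socket under the registration directory lets the kubelet's plugin watcher find it. A toy listener under an assumed path, only to show the endpoint shape:

package main

// Hypothetical sketch of why a CSI node driver mounts socket-dir and
// registration-dir. The path below is an illustrative stand-in, not a
// value taken from this log.

import (
	"fmt"
	"net"
	"os"
)

func main() {
	sock := "/tmp/csi-demo.sock" // stand-in for <socket-dir>/csi.sock
	os.Remove(sock)
	l, err := net.Listen("unix", sock)
	if err != nil {
		fmt.Println("listen:", err)
		return
	}
	defer l.Close()
	fmt.Println("CSI-style endpoint listening at", l.Addr())
}
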
Jan 23 23:55:40.675014 containerd[2023]: time="2026-01-23T23:55:40.674129713Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hv49q,Uid:c5ca2e24-260a-4a15-b489-065a69d6099b,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:40.728037 containerd[2023]: time="2026-01-23T23:55:40.727713061Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:55:40.728037 containerd[2023]: time="2026-01-23T23:55:40.727827181Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:55:40.728037 containerd[2023]: time="2026-01-23T23:55:40.727865857Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:40.730248 containerd[2023]: time="2026-01-23T23:55:40.730044265Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:55:40.768814 systemd[1]: Started cri-containerd-b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf.scope - libcontainer container b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf.
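
The containerd and systemd records trace the same CRI lifecycle for every pod in this log: RunPodSandbox returns a sandbox id, CreateContainer is issued within that sandbox and returns a container id, StartContainer runs it, and systemd wraps each id in a cri-containerd-<id>.scope unit. A toy model of the sequence with local helpers, not the real CRI client API:

package main

import (
	"crypto/sha256"
	"fmt"
)

// Deterministic stand-in for the 64-hex-char ids containerd returns.
func fakeID(seed string) string {
	return fmt.Sprintf("%x", sha256.Sum256([]byte(seed)))
}

func main() {
	sandbox := fakeID("calico-node-hv49q/sandbox")
	fmt.Println("RunPodSandbox returns sandbox id", sandbox)
	fmt.Println("systemd starts cri-containerd-" + sandbox + ".scope")

	ctr := fakeID("calico-node-hv49q/calico-node")
	fmt.Println("CreateContainer within sandbox", sandbox[:12], "returns container id", ctr)
	fmt.Println("StartContainer for", ctr[:12], "returns successfully")
}
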
Error: unexpected end of JSON input" Jan 23 23:55:40.783784 kubelet[3333]: E0123 23:55:40.783563 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.783784 kubelet[3333]: W0123 23:55:40.783609 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.783784 kubelet[3333]: E0123 23:55:40.783635 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.784955 kubelet[3333]: E0123 23:55:40.784238 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.784955 kubelet[3333]: W0123 23:55:40.784269 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.784955 kubelet[3333]: E0123 23:55:40.784295 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.784955 kubelet[3333]: E0123 23:55:40.784925 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.784955 kubelet[3333]: W0123 23:55:40.784947 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.785342 kubelet[3333]: E0123 23:55:40.784975 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.786403 kubelet[3333]: E0123 23:55:40.786312 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.786403 kubelet[3333]: W0123 23:55:40.786375 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.786768 kubelet[3333]: E0123 23:55:40.786441 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.787474 kubelet[3333]: E0123 23:55:40.786957 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.787474 kubelet[3333]: W0123 23:55:40.786989 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.787474 kubelet[3333]: E0123 23:55:40.787015 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 23:55:40.787474 kubelet[3333]: E0123 23:55:40.787447 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.787474 kubelet[3333]: W0123 23:55:40.787470 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.787813 kubelet[3333]: E0123 23:55:40.787492 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.788870 kubelet[3333]: E0123 23:55:40.788145 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.788870 kubelet[3333]: W0123 23:55:40.788177 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.788870 kubelet[3333]: E0123 23:55:40.788205 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.791487 kubelet[3333]: E0123 23:55:40.791437 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.791487 kubelet[3333]: W0123 23:55:40.791463 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.791594 kubelet[3333]: E0123 23:55:40.791493 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.792337 kubelet[3333]: E0123 23:55:40.792012 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.792337 kubelet[3333]: W0123 23:55:40.792045 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.792337 kubelet[3333]: E0123 23:55:40.792072 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 23:55:40.814517 kubelet[3333]: E0123 23:55:40.814333 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 23:55:40.814517 kubelet[3333]: W0123 23:55:40.814493 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 23:55:40.814707 kubelet[3333]: E0123 23:55:40.814657 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
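[editor's note] The triplet above comes from kubelet's FlexVolume dynamic-plugin prober: it execs each driver found under /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ with the "init" command and parses the driver's stdout as JSON. Here the nodeagent~uds/uds binary is not installed yet (Calico's flexvol-driver container only runs at 23:55:45, further down this log), so the call produces empty output and the JSON decode fails. A minimal Python sketch of that probe behaviour, not kubelet's actual code; the logged error text comes from Go's encoding/json, so Python's message will differ:

    import json
    import subprocess

    DRIVER = "/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds"

    try:
        # FlexVolume drivers are plain executables; "init" must print a JSON status.
        out = subprocess.run([DRIVER, "init"], capture_output=True, text=True).stdout
    except FileNotFoundError:
        out = ""  # missing executable -> empty output, matching the log above

    try:
        print(json.loads(out))  # a healthy driver prints {"status": "Success", ...}
    except json.JSONDecodeError as err:
        print(f'driver call failed, output: "{out}", error: {err}')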
Jan 23 23:55:40.815640 containerd[2023]: time="2026-01-23T23:55:40.815032934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-5588b6d57d-pts8f,Uid:d343cc17-1e84-450c-8a50-dfaadb68fa18,Namespace:calico-system,Attempt:0,} returns sandbox id \"0b1fdee34c2cf8a4695b5788aac215cddbef7c5750e246ba97e298c14977bae3\""
Jan 23 23:55:40.819472 containerd[2023]: time="2026-01-23T23:55:40.818741030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\""
Jan 23 23:55:40.859488 containerd[2023]: time="2026-01-23T23:55:40.859388150Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-hv49q,Uid:c5ca2e24-260a-4a15-b489-065a69d6099b,Namespace:calico-system,Attempt:0,} returns sandbox id \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\""
Jan 23 23:55:42.443243 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1176560967.mount: Deactivated successfully.
Jan 23 23:55:42.524852 kubelet[3333]: E0123 23:55:42.522793 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0"
Jan 23 23:55:43.547150 containerd[2023]: time="2026-01-23T23:55:43.546768267Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:55:43.549116 containerd[2023]: time="2026-01-23T23:55:43.548590167Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687"
Jan 23 23:55:43.549116 containerd[2023]: time="2026-01-23T23:55:43.549044979Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:55:43.553414 containerd[2023]: time="2026-01-23T23:55:43.553283607Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 23:55:43.556469 containerd[2023]: time="2026-01-23T23:55:43.555153555Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.736295669s"
Jan 23 23:55:43.556469 containerd[2023]: time="2026-01-23T23:55:43.555219231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\""
Jan 23 23:55:43.557058 containerd[2023]: time="2026-01-23T23:55:43.557013231Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\""
Jan 23 23:55:43.597236 containerd[2023]: time="2026-01-23T23:55:43.597151264Z" level=info msg="CreateContainer within sandbox \"0b1fdee34c2cf8a4695b5788aac215cddbef7c5750e246ba97e298c14977bae3\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}"
Jan 23 23:55:43.622422 containerd[2023]: time="2026-01-23T23:55:43.621656536Z" level=info msg="CreateContainer within sandbox \"0b1fdee34c2cf8a4695b5788aac215cddbef7c5750e246ba97e298c14977bae3\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"622f149fd269f5e837d3d1d95e492aade1fcf046a3facda0989cbf5e06dff61b\""
Jan 23 23:55:43.627035 containerd[2023]: time="2026-01-23T23:55:43.625258396Z" level=info msg="StartContainer for \"622f149fd269f5e837d3d1d95e492aade1fcf046a3facda0989cbf5e06dff61b\""
Jan 23 23:55:43.684258 systemd[1]: Started cri-containerd-622f149fd269f5e837d3d1d95e492aade1fcf046a3facda0989cbf5e06dff61b.scope - libcontainer container 622f149fd269f5e837d3d1d95e492aade1fcf046a3facda0989cbf5e06dff61b.
Jan 23 23:55:43.761944 containerd[2023]: time="2026-01-23T23:55:43.761841304Z" level=info msg="StartContainer for \"622f149fd269f5e837d3d1d95e492aade1fcf046a3facda0989cbf5e06dff61b\" returns successfully"
Jan 23 23:55:43.812562 kubelet[3333]: I0123 23:55:43.812369 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-5588b6d57d-pts8f" podStartSLOduration=1.073270368 podStartE2EDuration="3.812346665s" podCreationTimestamp="2026-01-23 23:55:40 +0000 UTC" firstStartedPulling="2026-01-23 23:55:40.817634882 +0000 UTC m=+34.558279265" lastFinishedPulling="2026-01-23 23:55:43.556711203 +0000 UTC m=+37.297355562" observedRunningTime="2026-01-23 23:55:43.809315105 +0000 UTC m=+37.549959500" watchObservedRunningTime="2026-01-23 23:55:43.812346665 +0000 UTC m=+37.552991072"
Jan 23 23:55:43.865048 kubelet[3333]: E0123 23:55:43.863417 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:55:43.865048 kubelet[3333]: W0123 23:55:43.863623 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:55:43.865048 kubelet[3333]: E0123 23:55:43.863665 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[duplicates elided: the FlexVolume init failure triplet repeats 32 more times between 23:55:43.866 and 23:55:43.933]
Jan 23 23:55:44.521936 kubelet[3333]: E0123 23:55:44.521649 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0"
Jan 23 23:55:44.883140 kubelet[3333]: E0123 23:55:44.883027 3333 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input
Jan 23 23:55:44.884941 kubelet[3333]: W0123 23:55:44.884103 3333 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: ""
Jan 23 23:55:44.884941 kubelet[3333]: E0123 23:55:44.884173 3333 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input"
[duplicates elided: the FlexVolume init failure triplet repeats 32 more times between 23:55:44.886 and 23:55:44.940]
Error: unexpected end of JSON input" Jan 23 23:55:45.067834 containerd[2023]: time="2026-01-23T23:55:45.066359343Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:45.067834 containerd[2023]: time="2026-01-23T23:55:45.067770951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 23:55:45.068620 containerd[2023]: time="2026-01-23T23:55:45.068571327Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:45.072180 containerd[2023]: time="2026-01-23T23:55:45.072116187Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:45.073799 containerd[2023]: time="2026-01-23T23:55:45.073752951Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.516572392s" Jan 23 23:55:45.076949 containerd[2023]: time="2026-01-23T23:55:45.076853631Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 23:55:45.093738 containerd[2023]: time="2026-01-23T23:55:45.093673095Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 23:55:45.132115 containerd[2023]: time="2026-01-23T23:55:45.132044223Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7\"" Jan 23 23:55:45.135146 containerd[2023]: time="2026-01-23T23:55:45.133433535Z" level=info msg="StartContainer for \"b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7\"" Jan 23 23:55:45.202330 systemd[1]: Started cri-containerd-b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7.scope - libcontainer container b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7. Jan 23 23:55:45.254822 containerd[2023]: time="2026-01-23T23:55:45.254761348Z" level=info msg="StartContainer for \"b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7\" returns successfully" Jan 23 23:55:45.291262 systemd[1]: cri-containerd-b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7.scope: Deactivated successfully. Jan 23 23:55:45.569478 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7-rootfs.mount: Deactivated successfully. 
Jan 23 23:55:45.689246 containerd[2023]: time="2026-01-23T23:55:45.689150478Z" level=info msg="shim disconnected" id=b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7 namespace=k8s.io Jan 23 23:55:45.689246 containerd[2023]: time="2026-01-23T23:55:45.689290254Z" level=warning msg="cleaning up after shim disconnected" id=b5aee3832cca11c4eac48ffb8140f73dc3fa3252c3185bad98fab8590fafadb7 namespace=k8s.io Jan 23 23:55:45.689246 containerd[2023]: time="2026-01-23T23:55:45.689312706Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:45.791100 containerd[2023]: time="2026-01-23T23:55:45.790778155Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 23 23:55:46.524258 kubelet[3333]: E0123 23:55:46.523619 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:48.523628 kubelet[3333]: E0123 23:55:48.522176 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:49.482654 containerd[2023]: time="2026-01-23T23:55:49.481171473Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.482654 containerd[2023]: time="2026-01-23T23:55:49.482604177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816" Jan 23 23:55:49.483457 containerd[2023]: time="2026-01-23T23:55:49.483413385Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.487085 containerd[2023]: time="2026-01-23T23:55:49.486999813Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:49.488906 containerd[2023]: time="2026-01-23T23:55:49.488830701Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 3.69799329s" Jan 23 23:55:49.489615 containerd[2023]: time="2026-01-23T23:55:49.489509253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 23 23:55:49.497867 containerd[2023]: time="2026-01-23T23:55:49.497788905Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 23 23:55:49.525273 containerd[2023]: time="2026-01-23T23:55:49.525211305Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for 
&ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9\"" Jan 23 23:55:49.527415 containerd[2023]: time="2026-01-23T23:55:49.527354853Z" level=info msg="StartContainer for \"105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9\"" Jan 23 23:55:49.593224 systemd[1]: Started cri-containerd-105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9.scope - libcontainer container 105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9. Jan 23 23:55:49.651673 containerd[2023]: time="2026-01-23T23:55:49.651499198Z" level=info msg="StartContainer for \"105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9\" returns successfully" Jan 23 23:55:50.522769 kubelet[3333]: E0123 23:55:50.521958 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:50.706806 containerd[2023]: time="2026-01-23T23:55:50.706726895Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 23:55:50.711875 systemd[1]: cri-containerd-105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9.scope: Deactivated successfully. Jan 23 23:55:50.754474 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9-rootfs.mount: Deactivated successfully. Jan 23 23:55:50.797212 kubelet[3333]: I0123 23:55:50.797095 3333 kubelet_node_status.go:439] "Fast updating node status as it just became ready" Jan 23 23:55:50.889242 systemd[1]: Created slice kubepods-burstable-pod75d141a9_546a_4b46_adcf_a6cd7a6e3073.slice - libcontainer container kubepods-burstable-pod75d141a9_546a_4b46_adcf_a6cd7a6e3073.slice. Jan 23 23:55:50.937405 systemd[1]: Created slice kubepods-burstable-pode585a60e_07e9_4e0a_95ee_73be5aa0422a.slice - libcontainer container kubepods-burstable-pode585a60e_07e9_4e0a_95ee_73be5aa0422a.slice. 
Jan 23 23:55:50.968804 kubelet[3333]: I0123 23:55:50.968389 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/e585a60e-07e9-4e0a-95ee-73be5aa0422a-config-volume\") pod \"coredns-66bc5c9577-5wd66\" (UID: \"e585a60e-07e9-4e0a-95ee-73be5aa0422a\") " pod="kube-system/coredns-66bc5c9577-5wd66" Jan 23 23:55:50.968804 kubelet[3333]: I0123 23:55:50.968456 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mbr9\" (UniqueName: \"kubernetes.io/projected/e585a60e-07e9-4e0a-95ee-73be5aa0422a-kube-api-access-7mbr9\") pod \"coredns-66bc5c9577-5wd66\" (UID: \"e585a60e-07e9-4e0a-95ee-73be5aa0422a\") " pod="kube-system/coredns-66bc5c9577-5wd66" Jan 23 23:55:50.968804 kubelet[3333]: I0123 23:55:50.968503 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2dgth\" (UniqueName: \"kubernetes.io/projected/75d141a9-546a-4b46-adcf-a6cd7a6e3073-kube-api-access-2dgth\") pod \"coredns-66bc5c9577-jl7gt\" (UID: \"75d141a9-546a-4b46-adcf-a6cd7a6e3073\") " pod="kube-system/coredns-66bc5c9577-jl7gt" Jan 23 23:55:50.968804 kubelet[3333]: I0123 23:55:50.968548 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/75d141a9-546a-4b46-adcf-a6cd7a6e3073-config-volume\") pod \"coredns-66bc5c9577-jl7gt\" (UID: \"75d141a9-546a-4b46-adcf-a6cd7a6e3073\") " pod="kube-system/coredns-66bc5c9577-jl7gt" Jan 23 23:55:50.988085 systemd[1]: Created slice kubepods-besteffort-pod88895574_7d47_4441_9e70_eebbca18d915.slice - libcontainer container kubepods-besteffort-pod88895574_7d47_4441_9e70_eebbca18d915.slice. Jan 23 23:55:51.076135 kubelet[3333]: I0123 23:55:51.071204 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/88895574-7d47-4441-9e70-eebbca18d915-calico-apiserver-certs\") pod \"calico-apiserver-58854c8f84-dfgrz\" (UID: \"88895574-7d47-4441-9e70-eebbca18d915\") " pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" Jan 23 23:55:51.076135 kubelet[3333]: I0123 23:55:51.071309 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5tzl8\" (UniqueName: \"kubernetes.io/projected/88895574-7d47-4441-9e70-eebbca18d915-kube-api-access-5tzl8\") pod \"calico-apiserver-58854c8f84-dfgrz\" (UID: \"88895574-7d47-4441-9e70-eebbca18d915\") " pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" Jan 23 23:55:51.084067 systemd[1]: Created slice kubepods-besteffort-podb627f7db_d96f_4cdc_9084_8b79e8e215fb.slice - libcontainer container kubepods-besteffort-podb627f7db_d96f_4cdc_9084_8b79e8e215fb.slice. Jan 23 23:55:51.163462 systemd[1]: Created slice kubepods-besteffort-pod3ac30c2d_dd8c_4060_a356_77e0062bc1c4.slice - libcontainer container kubepods-besteffort-pod3ac30c2d_dd8c_4060_a356_77e0062bc1c4.slice. 
Jan 23 23:55:51.173092 kubelet[3333]: I0123 23:55:51.172466 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/b627f7db-d96f-4cdc-9084-8b79e8e215fb-config\") pod \"goldmane-7c778bb748-zhdzf\" (UID: \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\") " pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.173092 kubelet[3333]: I0123 23:55:51.172559 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/b627f7db-d96f-4cdc-9084-8b79e8e215fb-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-zhdzf\" (UID: \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\") " pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.173092 kubelet[3333]: I0123 23:55:51.172604 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/b627f7db-d96f-4cdc-9084-8b79e8e215fb-goldmane-key-pair\") pod \"goldmane-7c778bb748-zhdzf\" (UID: \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\") " pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.173092 kubelet[3333]: I0123 23:55:51.172643 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrq75\" (UniqueName: \"kubernetes.io/projected/b627f7db-d96f-4cdc-9084-8b79e8e215fb-kube-api-access-wrq75\") pod \"goldmane-7c778bb748-zhdzf\" (UID: \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\") " pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.173092 kubelet[3333]: I0123 23:55:51.172682 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8d688\" (UniqueName: \"kubernetes.io/projected/3ac30c2d-dd8c-4060-a356-77e0062bc1c4-kube-api-access-8d688\") pod \"calico-apiserver-58854c8f84-79vx7\" (UID: \"3ac30c2d-dd8c-4060-a356-77e0062bc1c4\") " pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" Jan 23 23:55:51.187503 kubelet[3333]: I0123 23:55:51.172722 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/3ac30c2d-dd8c-4060-a356-77e0062bc1c4-calico-apiserver-certs\") pod \"calico-apiserver-58854c8f84-79vx7\" (UID: \"3ac30c2d-dd8c-4060-a356-77e0062bc1c4\") " pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" Jan 23 23:55:51.240773 systemd[1]: Created slice kubepods-besteffort-pod57f38bb4_b4b4_4bf6_8661_2d992d293396.slice - libcontainer container kubepods-besteffort-pod57f38bb4_b4b4_4bf6_8661_2d992d293396.slice. 
Jan 23 23:55:51.275534 kubelet[3333]: I0123 23:55:51.273687 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-ca-bundle\") pod \"whisker-675d86896d-jvm4v\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " pod="calico-system/whisker-675d86896d-jvm4v" Jan 23 23:55:51.275534 kubelet[3333]: I0123 23:55:51.273925 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-backend-key-pair\") pod \"whisker-675d86896d-jvm4v\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " pod="calico-system/whisker-675d86896d-jvm4v" Jan 23 23:55:51.275534 kubelet[3333]: I0123 23:55:51.273973 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szhs5\" (UniqueName: \"kubernetes.io/projected/57f38bb4-b4b4-4bf6-8661-2d992d293396-kube-api-access-szhs5\") pod \"whisker-675d86896d-jvm4v\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " pod="calico-system/whisker-675d86896d-jvm4v" Jan 23 23:55:51.307239 containerd[2023]: time="2026-01-23T23:55:51.307160482Z" level=info msg="shim disconnected" id=105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9 namespace=k8s.io Jan 23 23:55:51.309012 containerd[2023]: time="2026-01-23T23:55:51.308938678Z" level=warning msg="cleaning up after shim disconnected" id=105d4c10b80c60f75afb06a57fd79f9f45d633559a56f0534f002b5331f75dd9 namespace=k8s.io Jan 23 23:55:51.309241 containerd[2023]: time="2026-01-23T23:55:51.309210370Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:55:51.311212 containerd[2023]: time="2026-01-23T23:55:51.311159950Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jl7gt,Uid:75d141a9-546a-4b46-adcf-a6cd7a6e3073,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:51.315159 containerd[2023]: time="2026-01-23T23:55:51.315090850Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5wd66,Uid:e585a60e-07e9-4e0a-95ee-73be5aa0422a,Namespace:kube-system,Attempt:0,}" Jan 23 23:55:51.320326 containerd[2023]: time="2026-01-23T23:55:51.318386566Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-dfgrz,Uid:88895574-7d47-4441-9e70-eebbca18d915,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:55:51.340420 systemd[1]: Created slice kubepods-besteffort-pod32710301_53d1_443d_ade3_ac9179beb56f.slice - libcontainer container kubepods-besteffort-pod32710301_53d1_443d_ade3_ac9179beb56f.slice. 
Jan 23 23:55:51.375437 kubelet[3333]: I0123 23:55:51.375384 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/32710301-53d1-443d-ade3-ac9179beb56f-tigera-ca-bundle\") pod \"calico-kube-controllers-54c598b4dd-zn6ss\" (UID: \"32710301-53d1-443d-ade3-ac9179beb56f\") " pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" Jan 23 23:55:51.376750 kubelet[3333]: I0123 23:55:51.376644 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5trnq\" (UniqueName: \"kubernetes.io/projected/32710301-53d1-443d-ade3-ac9179beb56f-kube-api-access-5trnq\") pod \"calico-kube-controllers-54c598b4dd-zn6ss\" (UID: \"32710301-53d1-443d-ade3-ac9179beb56f\") " pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" Jan 23 23:55:51.377204 containerd[2023]: time="2026-01-23T23:55:51.376098466Z" level=warning msg="cleanup warnings time=\"2026-01-23T23:55:51Z\" level=warning msg=\"failed to remove runc container\" error=\"runc did not terminate successfully: exit status 255: \" runtime=io.containerd.runc.v2\n" namespace=k8s.io Jan 23 23:55:51.431524 containerd[2023]: time="2026-01-23T23:55:51.430583267Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zhdzf,Uid:b627f7db-d96f-4cdc-9084-8b79e8e215fb,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:51.502462 containerd[2023]: time="2026-01-23T23:55:51.502384211Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-79vx7,Uid:3ac30c2d-dd8c-4060-a356-77e0062bc1c4,Namespace:calico-apiserver,Attempt:0,}" Jan 23 23:55:51.567646 containerd[2023]: time="2026-01-23T23:55:51.567064091Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675d86896d-jvm4v,Uid:57f38bb4-b4b4-4bf6-8661-2d992d293396,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:51.651224 containerd[2023]: time="2026-01-23T23:55:51.651044652Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c598b4dd-zn6ss,Uid:32710301-53d1-443d-ade3-ac9179beb56f,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:51.733310 containerd[2023]: time="2026-01-23T23:55:51.733136004Z" level=error msg="Failed to destroy network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.737694 containerd[2023]: time="2026-01-23T23:55:51.736685244Z" level=error msg="encountered an error cleaning up failed sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.737694 containerd[2023]: time="2026-01-23T23:55:51.737188344Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-dfgrz,Uid:88895574-7d47-4441-9e70-eebbca18d915,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and 
has mounted /var/lib/calico/" Jan 23 23:55:51.738423 kubelet[3333]: E0123 23:55:51.738338 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.739530 kubelet[3333]: E0123 23:55:51.738456 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" Jan 23 23:55:51.739530 kubelet[3333]: E0123 23:55:51.738503 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" Jan 23 23:55:51.739530 kubelet[3333]: E0123 23:55:51.738625 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:55:51.774051 containerd[2023]: time="2026-01-23T23:55:51.771042204Z" level=error msg="Failed to destroy network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.789920 containerd[2023]: time="2026-01-23T23:55:51.788065656Z" level=error msg="encountered an error cleaning up failed sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.789920 containerd[2023]: time="2026-01-23T23:55:51.788177148Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jl7gt,Uid:75d141a9-546a-4b46-adcf-a6cd7a6e3073,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.799461 kubelet[3333]: E0123 23:55:51.797115 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.799461 kubelet[3333]: E0123 23:55:51.797197 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jl7gt" Jan 23 23:55:51.799461 kubelet[3333]: E0123 23:55:51.797234 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-jl7gt" Jan 23 23:55:51.799799 kubelet[3333]: E0123 23:55:51.797324 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-jl7gt_kube-system(75d141a9-546a-4b46-adcf-a6cd7a6e3073)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-jl7gt_kube-system(75d141a9-546a-4b46-adcf-a6cd7a6e3073)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jl7gt" podUID="75d141a9-546a-4b46-adcf-a6cd7a6e3073" Jan 23 23:55:51.812105 containerd[2023]: time="2026-01-23T23:55:51.809005104Z" level=error msg="Failed to destroy network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.819137 containerd[2023]: time="2026-01-23T23:55:51.812701968Z" level=error msg="encountered an error cleaning up failed sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.819137 containerd[2023]: time="2026-01-23T23:55:51.812799084Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5wd66,Uid:e585a60e-07e9-4e0a-95ee-73be5aa0422a,Namespace:kube-system,Attempt:0,} failed, error" error="failed to setup network for sandbox 
\"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.816088 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c-shm.mount: Deactivated successfully. Jan 23 23:55:51.819756 kubelet[3333]: E0123 23:55:51.814339 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.819756 kubelet[3333]: E0123 23:55:51.816796 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5wd66" Jan 23 23:55:51.819756 kubelet[3333]: E0123 23:55:51.816882 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-5wd66" Jan 23 23:55:51.816292 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702-shm.mount: Deactivated successfully. 
Jan 23 23:55:51.820064 kubelet[3333]: E0123 23:55:51.819009 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-5wd66_kube-system(e585a60e-07e9-4e0a-95ee-73be5aa0422a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-5wd66_kube-system(e585a60e-07e9-4e0a-95ee-73be5aa0422a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5wd66" podUID="e585a60e-07e9-4e0a-95ee-73be5aa0422a" Jan 23 23:55:51.865489 kubelet[3333]: I0123 23:55:51.864371 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:55:51.867814 containerd[2023]: time="2026-01-23T23:55:51.867545257Z" level=info msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" Jan 23 23:55:51.868107 containerd[2023]: time="2026-01-23T23:55:51.867872053Z" level=info msg="Ensure that sandbox b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0 in task-service has been cleanup successfully" Jan 23 23:55:51.876549 containerd[2023]: time="2026-01-23T23:55:51.876025021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 23:55:51.889437 kubelet[3333]: I0123 23:55:51.888963 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:55:51.893374 containerd[2023]: time="2026-01-23T23:55:51.892450837Z" level=info msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" Jan 23 23:55:51.894341 containerd[2023]: time="2026-01-23T23:55:51.894053869Z" level=info msg="Ensure that sandbox ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702 in task-service has been cleanup successfully" Jan 23 23:55:51.980196 containerd[2023]: time="2026-01-23T23:55:51.980117053Z" level=error msg="Failed to destroy network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.987928 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485-shm.mount: Deactivated successfully. 
Jan 23 23:55:51.990202 containerd[2023]: time="2026-01-23T23:55:51.989410537Z" level=error msg="encountered an error cleaning up failed sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.990420 containerd[2023]: time="2026-01-23T23:55:51.990314461Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zhdzf,Uid:b627f7db-d96f-4cdc-9084-8b79e8e215fb,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.990800 kubelet[3333]: E0123 23:55:51.990752 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:51.993556 kubelet[3333]: E0123 23:55:51.992012 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.993556 kubelet[3333]: E0123 23:55:51.992075 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-zhdzf" Jan 23 23:55:51.993556 kubelet[3333]: E0123 23:55:51.992172 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:55:52.013201 containerd[2023]: time="2026-01-23T23:55:52.013132293Z" level=error msg="Failed to destroy network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.017806 containerd[2023]: time="2026-01-23T23:55:52.017740509Z" level=error msg="encountered an error cleaning up failed sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.019759 containerd[2023]: time="2026-01-23T23:55:52.019692345Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-79vx7,Uid:3ac30c2d-dd8c-4060-a356-77e0062bc1c4,Namespace:calico-apiserver,Attempt:0,} failed, error" error="failed to setup network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.020329 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09-shm.mount: Deactivated successfully. Jan 23 23:55:52.020961 kubelet[3333]: E0123 23:55:52.020909 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.021153 kubelet[3333]: E0123 23:55:52.021119 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" Jan 23 23:55:52.021749 kubelet[3333]: E0123 23:55:52.021287 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" Jan 23 23:55:52.021749 kubelet[3333]: E0123 23:55:52.021395 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 
23:55:52.049191 containerd[2023]: time="2026-01-23T23:55:52.049125802Z" level=error msg="Failed to destroy network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.049680 containerd[2023]: time="2026-01-23T23:55:52.049619422Z" level=error msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" failed" error="failed to destroy network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.050279 kubelet[3333]: E0123 23:55:52.049943 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:55:52.050279 kubelet[3333]: E0123 23:55:52.050029 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702"} Jan 23 23:55:52.050279 kubelet[3333]: E0123 23:55:52.050118 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"75d141a9-546a-4b46-adcf-a6cd7a6e3073\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:52.050279 kubelet[3333]: E0123 23:55:52.050173 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"75d141a9-546a-4b46-adcf-a6cd7a6e3073\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-jl7gt" podUID="75d141a9-546a-4b46-adcf-a6cd7a6e3073" Jan 23 23:55:52.058108 containerd[2023]: time="2026-01-23T23:55:52.054479782Z" level=error msg="encountered an error cleaning up failed sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.058108 containerd[2023]: time="2026-01-23T23:55:52.056135434Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-675d86896d-jvm4v,Uid:57f38bb4-b4b4-4bf6-8661-2d992d293396,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup 
network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.058377 kubelet[3333]: E0123 23:55:52.056638 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.058377 kubelet[3333]: E0123 23:55:52.056709 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-675d86896d-jvm4v" Jan 23 23:55:52.058377 kubelet[3333]: E0123 23:55:52.056742 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-675d86896d-jvm4v" Jan 23 23:55:52.055268 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191-shm.mount: Deactivated successfully. 
Jan 23 23:55:52.058655 kubelet[3333]: E0123 23:55:52.056832 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-675d86896d-jvm4v_calico-system(57f38bb4-b4b4-4bf6-8661-2d992d293396)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-675d86896d-jvm4v_calico-system(57f38bb4-b4b4-4bf6-8661-2d992d293396)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-675d86896d-jvm4v" podUID="57f38bb4-b4b4-4bf6-8661-2d992d293396" Jan 23 23:55:52.068922 containerd[2023]: time="2026-01-23T23:55:52.067384150Z" level=error msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" failed" error="failed to destroy network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.069052 kubelet[3333]: E0123 23:55:52.067796 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:55:52.069052 kubelet[3333]: E0123 23:55:52.067949 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0"} Jan 23 23:55:52.069052 kubelet[3333]: E0123 23:55:52.068038 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"88895574-7d47-4441-9e70-eebbca18d915\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:52.069052 kubelet[3333]: E0123 23:55:52.068086 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"88895574-7d47-4441-9e70-eebbca18d915\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:55:52.094956 containerd[2023]: time="2026-01-23T23:55:52.094859494Z" level=error msg="Failed to destroy network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.095619 containerd[2023]: time="2026-01-23T23:55:52.095557774Z" level=error msg="encountered an error cleaning up failed sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.095718 containerd[2023]: time="2026-01-23T23:55:52.095644810Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c598b4dd-zn6ss,Uid:32710301-53d1-443d-ade3-ac9179beb56f,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.096175 kubelet[3333]: E0123 23:55:52.096106 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.096284 kubelet[3333]: E0123 23:55:52.096203 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" Jan 23 23:55:52.096284 kubelet[3333]: E0123 23:55:52.096237 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" Jan 23 23:55:52.096410 kubelet[3333]: E0123 23:55:52.096319 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:55:52.536182 systemd[1]: Created slice kubepods-besteffort-pod16aacbf4_be26_43d8_a2e1_8bb1a4ed82d0.slice - libcontainer 
container kubepods-besteffort-pod16aacbf4_be26_43d8_a2e1_8bb1a4ed82d0.slice. Jan 23 23:55:52.543655 containerd[2023]: time="2026-01-23T23:55:52.543156072Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsxtr,Uid:16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0,Namespace:calico-system,Attempt:0,}" Jan 23 23:55:52.640770 containerd[2023]: time="2026-01-23T23:55:52.640699537Z" level=error msg="Failed to destroy network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.641517 containerd[2023]: time="2026-01-23T23:55:52.641426605Z" level=error msg="encountered an error cleaning up failed sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\", marking sandbox state as SANDBOX_UNKNOWN" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.641614 containerd[2023]: time="2026-01-23T23:55:52.641562553Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsxtr,Uid:16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0,Namespace:calico-system,Attempt:0,} failed, error" error="failed to setup network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.644198 kubelet[3333]: E0123 23:55:52.641936 3333 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:52.644198 kubelet[3333]: E0123 23:55:52.642011 3333 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsxtr" Jan 23 23:55:52.644198 kubelet[3333]: E0123 23:55:52.642043 3333 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-bsxtr" Jan 23 23:55:52.644496 kubelet[3333]: E0123 23:55:52.642129 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0)\\\": rpc error: code = Unknown desc = failed to setup network for 
sandbox \\\"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:52.757470 systemd[1]: run-containerd-io.containerd.grpc.v1.cri-sandboxes-49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382-shm.mount: Deactivated successfully. Jan 23 23:55:52.895052 kubelet[3333]: I0123 23:55:52.893593 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:55:52.898389 containerd[2023]: time="2026-01-23T23:55:52.897674726Z" level=info msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" Jan 23 23:55:52.898389 containerd[2023]: time="2026-01-23T23:55:52.898031678Z" level=info msg="Ensure that sandbox 13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059 in task-service has been cleanup successfully" Jan 23 23:55:52.900697 kubelet[3333]: I0123 23:55:52.900554 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:55:52.901786 containerd[2023]: time="2026-01-23T23:55:52.901718510Z" level=info msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\"" Jan 23 23:55:52.902557 containerd[2023]: time="2026-01-23T23:55:52.902100518Z" level=info msg="Ensure that sandbox 949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191 in task-service has been cleanup successfully" Jan 23 23:55:52.908282 kubelet[3333]: I0123 23:55:52.907612 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:55:52.909383 containerd[2023]: time="2026-01-23T23:55:52.909284450Z" level=info msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" Jan 23 23:55:52.911750 containerd[2023]: time="2026-01-23T23:55:52.911361674Z" level=info msg="Ensure that sandbox 534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485 in task-service has been cleanup successfully" Jan 23 23:55:52.913872 kubelet[3333]: I0123 23:55:52.913805 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:55:52.917481 containerd[2023]: time="2026-01-23T23:55:52.917426594Z" level=info msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" Jan 23 23:55:52.918585 containerd[2023]: time="2026-01-23T23:55:52.918225842Z" level=info msg="Ensure that sandbox 4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c in task-service has been cleanup successfully" Jan 23 23:55:52.930296 kubelet[3333]: I0123 23:55:52.930123 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:55:52.932218 containerd[2023]: time="2026-01-23T23:55:52.932142038Z" level=info msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" Jan 23 23:55:52.932646 containerd[2023]: time="2026-01-23T23:55:52.932464322Z" level=info msg="Ensure that 
sandbox 49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382 in task-service has been cleanup successfully" Jan 23 23:55:52.951438 kubelet[3333]: I0123 23:55:52.951282 3333 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:55:52.959001 containerd[2023]: time="2026-01-23T23:55:52.958948634Z" level=info msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" Jan 23 23:55:52.960258 containerd[2023]: time="2026-01-23T23:55:52.959793386Z" level=info msg="Ensure that sandbox 1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09 in task-service has been cleanup successfully" Jan 23 23:55:53.052246 containerd[2023]: time="2026-01-23T23:55:53.052168967Z" level=error msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" failed" error="failed to destroy network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.052559 kubelet[3333]: E0123 23:55:53.052463 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:55:53.052559 kubelet[3333]: E0123 23:55:53.052527 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09"} Jan 23 23:55:53.052678 kubelet[3333]: E0123 23:55:53.052578 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"3ac30c2d-dd8c-4060-a356-77e0062bc1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.052678 kubelet[3333]: E0123 23:55:53.052634 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"3ac30c2d-dd8c-4060-a356-77e0062bc1c4\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:55:53.061719 containerd[2023]: time="2026-01-23T23:55:53.061636091Z" level=error msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" failed" error="failed to destroy network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (delete): 
stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.062099 kubelet[3333]: E0123 23:55:53.061975 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:55:53.062099 kubelet[3333]: E0123 23:55:53.062046 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059"} Jan 23 23:55:53.064071 kubelet[3333]: E0123 23:55:53.062099 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.064071 kubelet[3333]: E0123 23:55:53.062147 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:55:53.117249 containerd[2023]: time="2026-01-23T23:55:53.117153635Z" level=error msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" failed" error="failed to destroy network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.117801 kubelet[3333]: E0123 23:55:53.117506 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:55:53.117801 kubelet[3333]: E0123 23:55:53.117575 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c"} Jan 23 23:55:53.117801 kubelet[3333]: E0123 23:55:53.117634 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"e585a60e-07e9-4e0a-95ee-73be5aa0422a\" with 
KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.117801 kubelet[3333]: E0123 23:55:53.117680 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"e585a60e-07e9-4e0a-95ee-73be5aa0422a\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-5wd66" podUID="e585a60e-07e9-4e0a-95ee-73be5aa0422a" Jan 23 23:55:53.123491 containerd[2023]: time="2026-01-23T23:55:53.123402635Z" level=error msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" failed" error="failed to destroy network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.123791 kubelet[3333]: E0123 23:55:53.123730 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:55:53.123918 kubelet[3333]: E0123 23:55:53.123803 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191"} Jan 23 23:55:53.123918 kubelet[3333]: E0123 23:55:53.123855 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"57f38bb4-b4b4-4bf6-8661-2d992d293396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.124145 kubelet[3333]: E0123 23:55:53.123925 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"57f38bb4-b4b4-4bf6-8661-2d992d293396\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-675d86896d-jvm4v" podUID="57f38bb4-b4b4-4bf6-8661-2d992d293396" Jan 23 23:55:53.140371 containerd[2023]: time="2026-01-23T23:55:53.140223743Z" level=error 
msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" failed" error="failed to destroy network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.141241 kubelet[3333]: E0123 23:55:53.140675 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:55:53.141241 kubelet[3333]: E0123 23:55:53.140741 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" podSandboxID={"Type":"containerd","ID":"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485"} Jan 23 23:55:53.141241 kubelet[3333]: E0123 23:55:53.140791 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.141241 kubelet[3333]: E0123 23:55:53.140841 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"b627f7db-d96f-4cdc-9084-8b79e8e215fb\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:55:53.149022 containerd[2023]: time="2026-01-23T23:55:53.148075271Z" level=error msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" failed" error="failed to destroy network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 23:55:53.149158 kubelet[3333]: E0123 23:55:53.148526 3333 log.go:32] "StopPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to destroy network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" podSandboxID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:55:53.149158 kubelet[3333]: E0123 23:55:53.148617 3333 kuberuntime_manager.go:1665] "Failed to stop sandbox" 
podSandboxID={"Type":"containerd","ID":"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382"} Jan 23 23:55:53.149158 kubelet[3333]: E0123 23:55:53.148694 3333 kuberuntime_manager.go:1233] "killPodWithSyncResult failed" err="failed to \"KillPodSandbox\" for \"32710301-53d1-443d-ade3-ac9179beb56f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" Jan 23 23:55:53.149158 kubelet[3333]: E0123 23:55:53.148770 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"KillPodSandbox\" for \"32710301-53d1-443d-ade3-ac9179beb56f\" with KillPodSandboxError: \"rpc error: code = Unknown desc = failed to destroy network for sandbox \\\"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\\\": plugin type=\\\"calico\\\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:55:59.923143 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2175739376.mount: Deactivated successfully. Jan 23 23:55:59.987550 containerd[2023]: time="2026-01-23T23:55:59.987281937Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:59.989291 containerd[2023]: time="2026-01-23T23:55:59.988968801Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 23:55:59.993936 containerd[2023]: time="2026-01-23T23:55:59.991826781Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:59.995959 containerd[2023]: time="2026-01-23T23:55:59.995293209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 23:55:59.998445 containerd[2023]: time="2026-01-23T23:55:59.998359857Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 8.12185738s" Jan 23 23:55:59.998445 containerd[2023]: time="2026-01-23T23:55:59.998442669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 23:56:00.054184 containerd[2023]: time="2026-01-23T23:56:00.054115145Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 23:56:00.083528 containerd[2023]: time="2026-01-23T23:56:00.083347086Z" level=info msg="CreateContainer within sandbox \"b92e77ca68c086ffa3bf6d9ed001aea341883a6be757dab3fca13b79ec4309cf\" for 
&ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4dfa3393088b2e1a58127cfb7a9dbee7acf8c080734b0bf1ec7927bbe768af7c\"" Jan 23 23:56:00.086239 containerd[2023]: time="2026-01-23T23:56:00.086167194Z" level=info msg="StartContainer for \"4dfa3393088b2e1a58127cfb7a9dbee7acf8c080734b0bf1ec7927bbe768af7c\"" Jan 23 23:56:00.193249 systemd[1]: Started cri-containerd-4dfa3393088b2e1a58127cfb7a9dbee7acf8c080734b0bf1ec7927bbe768af7c.scope - libcontainer container 4dfa3393088b2e1a58127cfb7a9dbee7acf8c080734b0bf1ec7927bbe768af7c. Jan 23 23:56:00.303943 containerd[2023]: time="2026-01-23T23:56:00.303409267Z" level=info msg="StartContainer for \"4dfa3393088b2e1a58127cfb7a9dbee7acf8c080734b0bf1ec7927bbe768af7c\" returns successfully" Jan 23 23:56:00.564233 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 23:56:00.564402 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld <Jason@zx2c4.com>. All Rights Reserved. Jan 23 23:56:00.830014 containerd[2023]: time="2026-01-23T23:56:00.829851477Z" level=info msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\"" Jan 23 23:56:01.138106 kubelet[3333]: I0123 23:56:01.137583 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-hv49q" podStartSLOduration=1.9989212840000001 podStartE2EDuration="21.137530291s" podCreationTimestamp="2026-01-23 23:55:40 +0000 UTC" firstStartedPulling="2026-01-23 23:55:40.861614606 +0000 UTC m=+34.602258977" lastFinishedPulling="2026-01-23 23:56:00.000223601 +0000 UTC m=+53.740867984" observedRunningTime="2026-01-23 23:56:01.134370355 +0000 UTC m=+54.875014738" watchObservedRunningTime="2026-01-23 23:56:01.137530291 +0000 UTC m=+54.878174674" Jan 23 23:56:01.193532 systemd[1]: Started sshd@7-172.31.18.95:22-4.153.228.146:51606.service - OpenSSH per-connection server daemon (4.153.228.146:51606). Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.178 [INFO][4578] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.178 [INFO][4578] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" iface="eth0" netns="/var/run/netns/cni-ce292d67-0437-d736-54db-2c9e18c0e64f" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.179 [INFO][4578] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" iface="eth0" netns="/var/run/netns/cni-ce292d67-0437-d736-54db-2c9e18c0e64f" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.180 [INFO][4578] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" iface="eth0" netns="/var/run/netns/cni-ce292d67-0437-d736-54db-2c9e18c0e64f" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.180 [INFO][4578] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.180 [INFO][4578] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.336 [INFO][4603] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.336 [INFO][4603] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.337 [INFO][4603] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.354 [WARNING][4603] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.354 [INFO][4603] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.358 [INFO][4603] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:01.377475 containerd[2023]: 2026-01-23 23:56:01.372 [INFO][4578] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:01.383299 containerd[2023]: time="2026-01-23T23:56:01.380012576Z" level=info msg="TearDown network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" successfully" Jan 23 23:56:01.383299 containerd[2023]: time="2026-01-23T23:56:01.380058644Z" level=info msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" returns successfully" Jan 23 23:56:01.388014 systemd[1]: run-netns-cni\x2dce292d67\x2d0437\x2dd736\x2d54db\x2d2c9e18c0e64f.mount: Deactivated successfully. 
Jan 23 23:56:01.478325 kubelet[3333]: I0123 23:56:01.477518 3333 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-backend-key-pair\") pod \"57f38bb4-b4b4-4bf6-8661-2d992d293396\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " Jan 23 23:56:01.478325 kubelet[3333]: I0123 23:56:01.477629 3333 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-ca-bundle\") pod \"57f38bb4-b4b4-4bf6-8661-2d992d293396\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " Jan 23 23:56:01.478325 kubelet[3333]: I0123 23:56:01.477672 3333 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szhs5\" (UniqueName: \"kubernetes.io/projected/57f38bb4-b4b4-4bf6-8661-2d992d293396-kube-api-access-szhs5\") pod \"57f38bb4-b4b4-4bf6-8661-2d992d293396\" (UID: \"57f38bb4-b4b4-4bf6-8661-2d992d293396\") " Jan 23 23:56:01.493274 kubelet[3333]: I0123 23:56:01.490828 3333 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "57f38bb4-b4b4-4bf6-8661-2d992d293396" (UID: "57f38bb4-b4b4-4bf6-8661-2d992d293396"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 23:56:01.499243 kubelet[3333]: I0123 23:56:01.499184 3333 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "57f38bb4-b4b4-4bf6-8661-2d992d293396" (UID: "57f38bb4-b4b4-4bf6-8661-2d992d293396"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 23:56:01.500435 kubelet[3333]: I0123 23:56:01.500269 3333 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/57f38bb4-b4b4-4bf6-8661-2d992d293396-kube-api-access-szhs5" (OuterVolumeSpecName: "kube-api-access-szhs5") pod "57f38bb4-b4b4-4bf6-8661-2d992d293396" (UID: "57f38bb4-b4b4-4bf6-8661-2d992d293396"). InnerVolumeSpecName "kube-api-access-szhs5". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 23:56:01.506263 systemd[1]: var-lib-kubelet-pods-57f38bb4\x2db4b4\x2d4bf6\x2d8661\x2d2d992d293396-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dszhs5.mount: Deactivated successfully. Jan 23 23:56:01.506482 systemd[1]: var-lib-kubelet-pods-57f38bb4\x2db4b4\x2d4bf6\x2d8661\x2d2d992d293396-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
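The \x2d-laden .mount unit names systemd just deactivated are ordinary paths run through systemd's unit-name escaping: "/" becomes "-", and any byte outside [a-zA-Z0-9:_.] becomes \xNN, which is why each dash in the pod UID shows up as \x2d and the "~" in kubernetes.io~projected as \x7e. An illustrative Go reimplementation (a sketch in the spirit of systemd-escape --path; escapePath is this sketch's name, not systemd's code):

package main

import (
	"fmt"
	"strings"
)

// escapePath drops leading/trailing "/", turns the remaining "/" into "-",
// and hex-escapes every byte outside [a-zA-Z0-9:_.].
func escapePath(p string) string {
	p = strings.Trim(p, "/")
	var b strings.Builder
	for i := 0; i < len(p); i++ {
		c := p[i]
		switch {
		case c == '/':
			b.WriteByte('-')
		case c >= 'a' && c <= 'z', c >= 'A' && c <= 'Z',
			c >= '0' && c <= '9', c == ':', c == '_', c == '.':
			b.WriteByte(c)
		default:
			fmt.Fprintf(&b, `\x%02x`, c)
		}
	}
	return b.String()
}

func main() {
	// Reproduces the unit name logged when the projected service-account
	// volume of the old whisker pod was unmounted.
	fmt.Println(escapePath("/var/lib/kubelet/pods/57f38bb4-b4b4-4bf6-8661-2d992d293396/volumes/kubernetes.io~projected/kube-api-access-szhs5") + ".mount")
}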
Jan 23 23:56:01.578543 kubelet[3333]: I0123 23:56:01.578418 3333 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-ca-bundle\") on node \"ip-172-31-18-95\" DevicePath \"\"" Jan 23 23:56:01.578543 kubelet[3333]: I0123 23:56:01.578469 3333 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szhs5\" (UniqueName: \"kubernetes.io/projected/57f38bb4-b4b4-4bf6-8661-2d992d293396-kube-api-access-szhs5\") on node \"ip-172-31-18-95\" DevicePath \"\"" Jan 23 23:56:01.578543 kubelet[3333]: I0123 23:56:01.578493 3333 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/57f38bb4-b4b4-4bf6-8661-2d992d293396-whisker-backend-key-pair\") on node \"ip-172-31-18-95\" DevicePath \"\"" Jan 23 23:56:01.753311 sshd[4604]: Accepted publickey for core from 4.153.228.146 port 51606 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:01.757565 sshd[4604]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:01.772176 systemd-logind[1997]: New session 8 of user core. Jan 23 23:56:01.780190 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 23 23:56:02.020640 systemd[1]: Removed slice kubepods-besteffort-pod57f38bb4_b4b4_4bf6_8661_2d992d293396.slice - libcontainer container kubepods-besteffort-pod57f38bb4_b4b4_4bf6_8661_2d992d293396.slice. Jan 23 23:56:02.186782 systemd[1]: Created slice kubepods-besteffort-pod1e83b674_bf5a_4da7_960a_435a24e8e6d1.slice - libcontainer container kubepods-besteffort-pod1e83b674_bf5a_4da7_960a_435a24e8e6d1.slice. Jan 23 23:56:02.286102 kubelet[3333]: I0123 23:56:02.285345 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1e83b674-bf5a-4da7-960a-435a24e8e6d1-whisker-ca-bundle\") pod \"whisker-5757ddb5fd-l52x5\" (UID: \"1e83b674-bf5a-4da7-960a-435a24e8e6d1\") " pod="calico-system/whisker-5757ddb5fd-l52x5" Jan 23 23:56:02.286102 kubelet[3333]: I0123 23:56:02.285443 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ppjkz\" (UniqueName: \"kubernetes.io/projected/1e83b674-bf5a-4da7-960a-435a24e8e6d1-kube-api-access-ppjkz\") pod \"whisker-5757ddb5fd-l52x5\" (UID: \"1e83b674-bf5a-4da7-960a-435a24e8e6d1\") " pod="calico-system/whisker-5757ddb5fd-l52x5" Jan 23 23:56:02.286102 kubelet[3333]: I0123 23:56:02.285503 3333 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1e83b674-bf5a-4da7-960a-435a24e8e6d1-whisker-backend-key-pair\") pod \"whisker-5757ddb5fd-l52x5\" (UID: \"1e83b674-bf5a-4da7-960a-435a24e8e6d1\") " pod="calico-system/whisker-5757ddb5fd-l52x5" Jan 23 23:56:02.433146 sshd[4604]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:02.441031 systemd[1]: sshd@7-172.31.18.95:22-4.153.228.146:51606.service: Deactivated successfully. Jan 23 23:56:02.446371 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 23:56:02.450652 systemd-logind[1997]: Session 8 logged out. Waiting for processes to exit. Jan 23 23:56:02.453080 systemd-logind[1997]: Removed session 8. 
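The slice removed for the old whisker pod and the one created for its replacement follow the same naming scheme: under the systemd cgroup driver, kubelet nests pods as kubepods-<qos>-pod<uid>.slice, with the UID's dashes rewritten to underscores so they are not read as slice-hierarchy separators. A small sketch of that mapping (podSliceName is an illustrative helper, not kubelet's API):

package main

import (
	"fmt"
	"strings"
)

// podSliceName derives the systemd slice name for a pod, matching the
// "Removed slice"/"Created slice" entries above; dashes in the UID become
// underscores because "-" is how systemd nests slices.
func podSliceName(qosClass, podUID string) string {
	return fmt.Sprintf("kubepods-%s-pod%s.slice",
		strings.ToLower(qosClass), strings.ReplaceAll(podUID, "-", "_"))
}

func main() {
	// The two BestEffort whisker pods seen in this log:
	fmt.Println(podSliceName("BestEffort", "57f38bb4-b4b4-4bf6-8661-2d992d293396"))
	fmt.Println(podSliceName("BestEffort", "1e83b674-bf5a-4da7-960a-435a24e8e6d1"))
}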
Jan 23 23:56:02.507188 containerd[2023]: time="2026-01-23T23:56:02.507099118Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5757ddb5fd-l52x5,Uid:1e83b674-bf5a-4da7-960a-435a24e8e6d1,Namespace:calico-system,Attempt:0,}" Jan 23 23:56:02.529279 kubelet[3333]: I0123 23:56:02.526923 3333 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="57f38bb4-b4b4-4bf6-8661-2d992d293396" path="/var/lib/kubelet/pods/57f38bb4-b4b4-4bf6-8661-2d992d293396/volumes" Jan 23 23:56:02.735748 systemd-networkd[1935]: califc277d7c3be: Link UP Jan 23 23:56:02.738110 (udev-worker)[4557]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:02.738638 systemd-networkd[1935]: califc277d7c3be: Gained carrier Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.579 [INFO][4672] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.607 [INFO][4672] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0 whisker-5757ddb5fd- calico-system 1e83b674-bf5a-4da7-960a-435a24e8e6d1 997 0 2026-01-23 23:56:02 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:5757ddb5fd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-95 whisker-5757ddb5fd-l52x5 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] califc277d7c3be [] [] }} ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.607 [INFO][4672] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.655 [INFO][4684] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" HandleID="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Workload="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.656 [INFO][4684] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" HandleID="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Workload="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-95", "pod":"whisker-5757ddb5fd-l52x5", "timestamp":"2026-01-23 23:56:02.655795258 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.656 [INFO][4684] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.656 [INFO][4684] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.656 [INFO][4684] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95' Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.670 [INFO][4684] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.678 [INFO][4684] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.685 [INFO][4684] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.691 [INFO][4684] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.695 [INFO][4684] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.696 [INFO][4684] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.698 [INFO][4684] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11 Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.706 [INFO][4684] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.715 [INFO][4684] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.129/26] block=192.168.125.128/26 handle="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.715 [INFO][4684] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.129/26] handle="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" host="ip-172-31-18-95" Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.715 [INFO][4684] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
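The ipam lines above are one complete block-affinity allocation: the host's claim on 192.168.125.128/26 is confirmed, the block is loaded, one ordinal is assigned under the host-wide lock, and the block is written back, yielding 192.168.125.129. A toy Go model of the assignment step (Calico's real allocator persists blocks in its datastore; the pre-taken ordinal 0 below is purely this sketch's assumption about why .128 itself is skipped):

package main

import (
	"fmt"
	"net/netip"
)

// block models a /26 allocation block: a base address plus one
// used-ordinal flag per address, 64 in total.
type block struct {
	base netip.Addr // 192.168.125.128 in this log
	used [64]bool
}

// assign hands out the lowest free ordinal as base+ordinal.
func (b *block) assign() (netip.Addr, bool) {
	for ord := 0; ord < len(b.used); ord++ {
		if !b.used[ord] {
			b.used[ord] = true
			addr := b.base
			for i := 0; i < ord; i++ {
				addr = addr.Next()
			}
			return addr, true
		}
	}
	return netip.Addr{}, false
}

func main() {
	b := &block{base: netip.MustParseAddr("192.168.125.128")}
	b.used[0] = true // assumption: ordinal 0 was already taken earlier
	for i := 0; i < 2; i++ {
		if a, ok := b.assign(); ok {
			fmt.Println(a) // 192.168.125.129, then 192.168.125.130,
			// matching the two claims made in this section
		}
	}
}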
Jan 23 23:56:02.804173 containerd[2023]: 2026-01-23 23:56:02.715 [INFO][4684] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.129/26] IPv6=[] ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" HandleID="k8s-pod-network.671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Workload="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.719 [INFO][4672] cni-plugin/k8s.go 418: Populated endpoint ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0", GenerateName:"whisker-5757ddb5fd-", Namespace:"calico-system", SelfLink:"", UID:"1e83b674-bf5a-4da7-960a-435a24e8e6d1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5757ddb5fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"whisker-5757ddb5fd-l52x5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc277d7c3be", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.719 [INFO][4672] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.129/32] ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.719 [INFO][4672] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califc277d7c3be ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.739 [INFO][4672] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.740 [INFO][4672] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" 
WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0", GenerateName:"whisker-5757ddb5fd-", Namespace:"calico-system", SelfLink:"", UID:"1e83b674-bf5a-4da7-960a-435a24e8e6d1", ResourceVersion:"997", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 56, 2, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"5757ddb5fd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11", Pod:"whisker-5757ddb5fd-l52x5", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.125.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"califc277d7c3be", MAC:"2e:32:cf:2e:3e:12", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:02.805324 containerd[2023]: 2026-01-23 23:56:02.797 [INFO][4672] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11" Namespace="calico-system" Pod="whisker-5757ddb5fd-l52x5" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--5757ddb5fd--l52x5-eth0" Jan 23 23:56:02.866292 containerd[2023]: time="2026-01-23T23:56:02.865158383Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:02.866292 containerd[2023]: time="2026-01-23T23:56:02.865290071Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:02.866292 containerd[2023]: time="2026-01-23T23:56:02.866241719Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:02.866848 containerd[2023]: time="2026-01-23T23:56:02.866742155Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:02.940359 systemd[1]: Started cri-containerd-671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11.scope - libcontainer container 671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11. 
Jan 23 23:56:03.070836 containerd[2023]: time="2026-01-23T23:56:03.070473320Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-5757ddb5fd-l52x5,Uid:1e83b674-bf5a-4da7-960a-435a24e8e6d1,Namespace:calico-system,Attempt:0,} returns sandbox id \"671603911dcea966ba228fa73a9e50b98cca2ac3150dd2217e7a03c50447ef11\"" Jan 23 23:56:03.082251 containerd[2023]: time="2026-01-23T23:56:03.082192832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:56:03.387363 containerd[2023]: time="2026-01-23T23:56:03.387195214Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:03.391226 containerd[2023]: time="2026-01-23T23:56:03.391131670Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:56:03.391834 containerd[2023]: time="2026-01-23T23:56:03.391312654Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:56:03.391961 kubelet[3333]: E0123 23:56:03.391729 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:03.391961 kubelet[3333]: E0123 23:56:03.391808 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:03.392562 kubelet[3333]: E0123 23:56:03.391983 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:03.395494 containerd[2023]: time="2026-01-23T23:56:03.395430454Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:56:03.522776 containerd[2023]: time="2026-01-23T23:56:03.522646067Z" level=info msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" Jan 23 23:56:03.686141 containerd[2023]: time="2026-01-23T23:56:03.685967207Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:03.710356 containerd[2023]: time="2026-01-23T23:56:03.710195100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:56:03.710356 containerd[2023]: time="2026-01-23T23:56:03.710287680Z" level=info 
msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:03.710698 kubelet[3333]: E0123 23:56:03.710527 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:03.710698 kubelet[3333]: E0123 23:56:03.710586 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:03.710698 kubelet[3333]: E0123 23:56:03.710685 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:03.710961 kubelet[3333]: E0123 23:56:03.710752 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.677 [INFO][4837] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.677 [INFO][4837] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" iface="eth0" netns="/var/run/netns/cni-cc0487d3-cbf4-9ff5-ef8b-8f55afc50ae7" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.678 [INFO][4837] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" iface="eth0" netns="/var/run/netns/cni-cc0487d3-cbf4-9ff5-ef8b-8f55afc50ae7" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.678 [INFO][4837] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" iface="eth0" netns="/var/run/netns/cni-cc0487d3-cbf4-9ff5-ef8b-8f55afc50ae7" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.678 [INFO][4837] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.678 [INFO][4837] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.740 [INFO][4846] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.741 [INFO][4846] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.741 [INFO][4846] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.757 [WARNING][4846] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.758 [INFO][4846] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.760 [INFO][4846] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:03.768086 containerd[2023]: 2026-01-23 23:56:03.764 [INFO][4837] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:03.771165 containerd[2023]: time="2026-01-23T23:56:03.770110560Z" level=info msg="TearDown network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" successfully" Jan 23 23:56:03.771165 containerd[2023]: time="2026-01-23T23:56:03.770157096Z" level=info msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" returns successfully" Jan 23 23:56:03.780956 containerd[2023]: time="2026-01-23T23:56:03.780201384Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5wd66,Uid:e585a60e-07e9-4e0a-95ee-73be5aa0422a,Namespace:kube-system,Attempt:1,}" Jan 23 23:56:03.782413 systemd[1]: run-netns-cni\x2dcc0487d3\x2dcbf4\x2d9ff5\x2def8b\x2d8f55afc50ae7.mount: Deactivated successfully. 
Jan 23 23:56:04.014197 kubelet[3333]: E0123 23:56:04.013317 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:04.166841 systemd-networkd[1935]: cali2da03103d86: Link UP Jan 23 23:56:04.171912 systemd-networkd[1935]: cali2da03103d86: Gained carrier Jan 23 23:56:04.176447 (udev-worker)[4556]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:03.920 [INFO][4853] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:03.957 [INFO][4853] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0 coredns-66bc5c9577- kube-system e585a60e-07e9-4e0a-95ee-73be5aa0422a 1020 0 2026-01-23 23:55:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-95 coredns-66bc5c9577-5wd66 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali2da03103d86 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:03.957 [INFO][4853] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.050 [INFO][4871] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" HandleID="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.051 [INFO][4871] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" HandleID="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" 
assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400032ce10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-95", "pod":"coredns-66bc5c9577-5wd66", "timestamp":"2026-01-23 23:56:04.050352525 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.051 [INFO][4871] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.051 [INFO][4871] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.051 [INFO][4871] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95' Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.092 [INFO][4871] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.100 [INFO][4871] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.121 [INFO][4871] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.124 [INFO][4871] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.128 [INFO][4871] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.128 [INFO][4871] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.131 [INFO][4871] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877 Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.139 [INFO][4871] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.152 [INFO][4871] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.130/26] block=192.168.125.128/26 handle="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.152 [INFO][4871] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.130/26] handle="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" host="ip-172-31-18-95" Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.152 [INFO][4871] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
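
Note: the sequence above is the IPAM assignment algorithm in miniature — acquire the host-wide lock, look up the host's block affinity (192.168.125.128/26 for ip-172-31-18-95), load the block, and claim the first free address in it (192.168.125.130 here, since .128 and .129 are already in use). A minimal Go sketch of first-free-address-in-an-affine-block allocation (illustrative only; assumes IPv4, a map of used addresses, and a block that fits in the last octet, which a /26 does):

package main

import (
	"fmt"
	"net"
	"sync"
)

var mu sync.Mutex // stands in for the host-wide IPAM lock

// nextFree claims the first unused address in an affine IPv4 block.
func nextFree(block *net.IPNet, used map[string]bool) (net.IP, bool) {
	mu.Lock()
	defer mu.Unlock()
	base := block.IP.To4()
	ones, bits := block.Mask.Size()
	for i := 0; i < 1<<(bits-ones); i++ { // a /26 holds 64 addresses
		cand := net.IPv4(base[0], base[1], base[2], base[3]+byte(i))
		if !used[cand.String()] {
			used[cand.String()] = true
			return cand, true
		}
	}
	return nil, false
}

func main() {
	_, block, _ := net.ParseCIDR("192.168.125.128/26")
	used := map[string]bool{"192.168.125.128": true, "192.168.125.129": true}
	if ip, ok := nextFree(block, used); ok {
		fmt.Println("claimed", ip) // claimed 192.168.125.130, as in the log
	}
}
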
Jan 23 23:56:04.218257 containerd[2023]: 2026-01-23 23:56:04.152 [INFO][4871] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.130/26] IPv6=[] ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" HandleID="k8s-pod-network.994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.157 [INFO][4853] cni-plugin/k8s.go 418: Populated endpoint ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e585a60e-07e9-4e0a-95ee-73be5aa0422a", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"coredns-66bc5c9577-5wd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da03103d86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.157 [INFO][4853] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.130/32] ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.157 [INFO][4853] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali2da03103d86 ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 
23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.176 [INFO][4853] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.178 [INFO][4853] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e585a60e-07e9-4e0a-95ee-73be5aa0422a", ResourceVersion:"1020", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877", Pod:"coredns-66bc5c9577-5wd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da03103d86", MAC:"4a:e5:0d:e1:43:a1", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:04.220773 containerd[2023]: 2026-01-23 23:56:04.206 [INFO][4853] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877" Namespace="kube-system" Pod="coredns-66bc5c9577-5wd66" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:04.276622 containerd[2023]: time="2026-01-23T23:56:04.276386134Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:04.277330 containerd[2023]: time="2026-01-23T23:56:04.276843694Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:04.277330 containerd[2023]: time="2026-01-23T23:56:04.277126054Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:04.278667 containerd[2023]: time="2026-01-23T23:56:04.277999306Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:04.350620 systemd[1]: Started cri-containerd-994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877.scope - libcontainer container 994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877. Jan 23 23:56:04.482624 containerd[2023]: time="2026-01-23T23:56:04.482076395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-5wd66,Uid:e585a60e-07e9-4e0a-95ee-73be5aa0422a,Namespace:kube-system,Attempt:1,} returns sandbox id \"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877\"" Jan 23 23:56:04.500975 containerd[2023]: time="2026-01-23T23:56:04.500743379Z" level=info msg="CreateContainer within sandbox \"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:56:04.530024 containerd[2023]: time="2026-01-23T23:56:04.529122552Z" level=info msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" Jan 23 23:56:04.564115 containerd[2023]: time="2026-01-23T23:56:04.560219220Z" level=info msg="CreateContainer within sandbox \"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"7be93499079aa011fca43d2a01ddfd3bfa9a7deea5c652b95b4cee259ac774e1\"" Jan 23 23:56:04.564115 containerd[2023]: time="2026-01-23T23:56:04.562347288Z" level=info msg="StartContainer for \"7be93499079aa011fca43d2a01ddfd3bfa9a7deea5c652b95b4cee259ac774e1\"" Jan 23 23:56:04.604489 systemd-networkd[1935]: califc277d7c3be: Gained IPv6LL Jan 23 23:56:04.679954 kernel: bpftool[4984]: memfd_create() called without MFD_EXEC or MFD_NOEXEC_SEAL set Jan 23 23:56:04.692525 systemd[1]: Started cri-containerd-7be93499079aa011fca43d2a01ddfd3bfa9a7deea5c652b95b4cee259ac774e1.scope - libcontainer container 7be93499079aa011fca43d2a01ddfd3bfa9a7deea5c652b95b4cee259ac774e1. Jan 23 23:56:04.786445 containerd[2023]: time="2026-01-23T23:56:04.785460769Z" level=info msg="StartContainer for \"7be93499079aa011fca43d2a01ddfd3bfa9a7deea5c652b95b4cee259ac774e1\" returns successfully" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.760 [INFO][4953] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.764 [INFO][4953] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" iface="eth0" netns="/var/run/netns/cni-de585215-58ec-cc3b-ea75-f1a16582b357" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.765 [INFO][4953] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. 
ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" iface="eth0" netns="/var/run/netns/cni-de585215-58ec-cc3b-ea75-f1a16582b357" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.765 [INFO][4953] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" iface="eth0" netns="/var/run/netns/cni-de585215-58ec-cc3b-ea75-f1a16582b357" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.765 [INFO][4953] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.765 [INFO][4953] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.815 [INFO][5000] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.815 [INFO][5000] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.815 [INFO][5000] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.835 [WARNING][5000] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.835 [INFO][5000] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.842 [INFO][5000] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:04.852963 containerd[2023]: 2026-01-23 23:56:04.846 [INFO][4953] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:04.852963 containerd[2023]: time="2026-01-23T23:56:04.852468901Z" level=info msg="TearDown network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" successfully" Jan 23 23:56:04.852963 containerd[2023]: time="2026-01-23T23:56:04.852507397Z" level=info msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" returns successfully" Jan 23 23:56:04.858860 containerd[2023]: time="2026-01-23T23:56:04.858084709Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c598b4dd-zn6ss,Uid:32710301-53d1-443d-ade3-ac9179beb56f,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:05.034805 kubelet[3333]: E0123 23:56:05.034199 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:05.070753 kubelet[3333]: I0123 23:56:05.069260 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-5wd66" podStartSLOduration=52.069237046 podStartE2EDuration="52.069237046s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:05.069030478 +0000 UTC m=+58.809674873" watchObservedRunningTime="2026-01-23 23:56:05.069237046 +0000 UTC m=+58.809881501" Jan 23 23:56:05.188836 systemd-networkd[1935]: cali7688df54f08: Link UP Jan 23 23:56:05.190796 systemd-networkd[1935]: cali7688df54f08: Gained carrier Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:04.963 [INFO][5012] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0 calico-kube-controllers-54c598b4dd- calico-system 32710301-53d1-443d-ade3-ac9179beb56f 1038 0 2026-01-23 23:55:40 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:54c598b4dd projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-95 calico-kube-controllers-54c598b4dd-zn6ss eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali7688df54f08 [] [] }} ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" 
WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:04.963 [INFO][5012] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.064 [INFO][5024] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" HandleID="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.065 [INFO][5024] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" HandleID="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024bad0), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-95", "pod":"calico-kube-controllers-54c598b4dd-zn6ss", "timestamp":"2026-01-23 23:56:05.064974838 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.065 [INFO][5024] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.065 [INFO][5024] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.065 [INFO][5024] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95' Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.101 [INFO][5024] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.128 [INFO][5024] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.138 [INFO][5024] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.142 [INFO][5024] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.147 [INFO][5024] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.147 [INFO][5024] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.150 [INFO][5024] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28 Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.161 [INFO][5024] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.176 [INFO][5024] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.131/26] block=192.168.125.128/26 handle="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.176 [INFO][5024] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.131/26] handle="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" host="ip-172-31-18-95" Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.176 [INFO][5024] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
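
Note: the kubelet's podStartSLOduration entry a little above is plain timestamp arithmetic — watchObservedRunningTime minus podCreationTimestamp, i.e. 23:56:05.069237046 − 23:55:13 = 52.069237046s (no image-pull time is subtracted here because firstStartedPulling/lastFinishedPulling are the zero time). Verified in a few lines of Go:

package main

import (
	"fmt"
	"time"
)

func main() {
	created, _ := time.Parse(time.RFC3339, "2026-01-23T23:55:13Z")
	running, _ := time.Parse(time.RFC3339Nano, "2026-01-23T23:56:05.069237046Z")
	fmt.Println(running.Sub(created)) // 52.069237046s, matching podStartSLOduration
}
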
Jan 23 23:56:05.241533 containerd[2023]: 2026-01-23 23:56:05.176 [INFO][5024] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.131/26] IPv6=[] ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" HandleID="k8s-pod-network.da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.242776 containerd[2023]: 2026-01-23 23:56:05.180 [INFO][5012] cni-plugin/k8s.go 418: Populated endpoint ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0", GenerateName:"calico-kube-controllers-54c598b4dd-", Namespace:"calico-system", SelfLink:"", UID:"32710301-53d1-443d-ade3-ac9179beb56f", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c598b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"calico-kube-controllers-54c598b4dd-zn6ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7688df54f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:05.242776 containerd[2023]: 2026-01-23 23:56:05.181 [INFO][5012] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.131/32] ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.242776 containerd[2023]: 2026-01-23 23:56:05.181 [INFO][5012] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali7688df54f08 ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.242776 containerd[2023]: 2026-01-23 23:56:05.191 [INFO][5012] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.242776 containerd[2023]: 
2026-01-23 23:56:05.195 [INFO][5012] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0", GenerateName:"calico-kube-controllers-54c598b4dd-", Namespace:"calico-system", SelfLink:"", UID:"32710301-53d1-443d-ade3-ac9179beb56f", ResourceVersion:"1038", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c598b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28", Pod:"calico-kube-controllers-54c598b4dd-zn6ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7688df54f08", MAC:"d2:14:bf:cd:4e:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:05.242776 containerd[2023]: 2026-01-23 23:56:05.232 [INFO][5012] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28" Namespace="calico-system" Pod="calico-kube-controllers-54c598b4dd-zn6ss" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:05.309632 systemd[1]: run-netns-cni\x2dde585215\x2d58ec\x2dcc3b\x2dea75\x2df1a16582b357.mount: Deactivated successfully. Jan 23 23:56:05.320128 containerd[2023]: time="2026-01-23T23:56:05.319188300Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:05.320128 containerd[2023]: time="2026-01-23T23:56:05.319295328Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:05.320128 containerd[2023]: time="2026-01-23T23:56:05.319333140Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:05.320128 containerd[2023]: time="2026-01-23T23:56:05.319491252Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:05.388930 systemd[1]: Started cri-containerd-da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28.scope - libcontainer container da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28. 
Jan 23 23:56:05.524759 containerd[2023]: time="2026-01-23T23:56:05.524692609Z" level=info msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" Jan 23 23:56:05.765265 containerd[2023]: time="2026-01-23T23:56:05.765077546Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-54c598b4dd-zn6ss,Uid:32710301-53d1-443d-ade3-ac9179beb56f,Namespace:calico-system,Attempt:1,} returns sandbox id \"da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28\"" Jan 23 23:56:05.773397 containerd[2023]: time="2026-01-23T23:56:05.772727354Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:56:05.825821 systemd-networkd[1935]: cali2da03103d86: Gained IPv6LL Jan 23 23:56:05.867038 systemd-networkd[1935]: vxlan.calico: Link UP Jan 23 23:56:05.867059 systemd-networkd[1935]: vxlan.calico: Gained carrier Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.692 [INFO][5100] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.692 [INFO][5100] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" iface="eth0" netns="/var/run/netns/cni-daca4198-8157-d769-9819-2174356e2901" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.693 [INFO][5100] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" iface="eth0" netns="/var/run/netns/cni-daca4198-8157-d769-9819-2174356e2901" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.693 [INFO][5100] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" iface="eth0" netns="/var/run/netns/cni-daca4198-8157-d769-9819-2174356e2901" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.693 [INFO][5100] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.693 [INFO][5100] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.820 [INFO][5109] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.821 [INFO][5109] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.821 [INFO][5109] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.850 [WARNING][5109] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.850 [INFO][5109] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.857 [INFO][5109] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:05.873994 containerd[2023]: 2026-01-23 23:56:05.864 [INFO][5100] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:05.875107 containerd[2023]: time="2026-01-23T23:56:05.875047082Z" level=info msg="TearDown network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" successfully" Jan 23 23:56:05.875231 containerd[2023]: time="2026-01-23T23:56:05.875100590Z" level=info msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" returns successfully" Jan 23 23:56:05.881340 systemd[1]: run-netns-cni\x2ddaca4198\x2d8157\x2dd769\x2d9819\x2d2174356e2901.mount: Deactivated successfully. Jan 23 23:56:05.886358 containerd[2023]: time="2026-01-23T23:56:05.886279298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsxtr,Uid:16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:06.102718 containerd[2023]: time="2026-01-23T23:56:06.101528087Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:06.105244 containerd[2023]: time="2026-01-23T23:56:06.104036099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:56:06.105496 containerd[2023]: time="2026-01-23T23:56:06.104122391Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:06.107926 kubelet[3333]: E0123 23:56:06.107038 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:06.107926 kubelet[3333]: E0123 23:56:06.107125 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:06.107926 kubelet[3333]: E0123 23:56:06.107244 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
calico-kube-controllers start failed in pod calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:06.107926 kubelet[3333]: E0123 23:56:06.107321 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:06.182091 systemd-networkd[1935]: califad62ff3841: Link UP Jan 23 23:56:06.183496 (udev-worker)[5138]: Network interface NamePolicy= disabled on kernel command line. Jan 23 23:56:06.186271 systemd-networkd[1935]: califad62ff3841: Gained carrier Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.016 [INFO][5140] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0 csi-node-driver- calico-system 16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0 1057 0 2026-01-23 23:55:40 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-95 csi-node-driver-bsxtr eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] califad62ff3841 [] [] }} ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.016 [INFO][5140] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.094 [INFO][5153] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" HandleID="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.095 [INFO][5153] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" HandleID="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3c60), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-95", "pod":"csi-node-driver-bsxtr", 
"timestamp":"2026-01-23 23:56:06.094824083 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.095 [INFO][5153] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.095 [INFO][5153] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.095 [INFO][5153] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95' Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.111 [INFO][5153] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.123 [INFO][5153] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.134 [INFO][5153] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.137 [INFO][5153] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.144 [INFO][5153] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.145 [INFO][5153] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.148 [INFO][5153] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9 Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.157 [INFO][5153] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.166 [INFO][5153] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.132/26] block=192.168.125.128/26 handle="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.166 [INFO][5153] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.132/26] handle="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" host="ip-172-31-18-95" Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.166 [INFO][5153] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:56:06.236984 containerd[2023]: 2026-01-23 23:56:06.166 [INFO][5153] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.132/26] IPv6=[] ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" HandleID="k8s-pod-network.1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.171 [INFO][5140] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"csi-node-driver-bsxtr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califad62ff3841", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.171 [INFO][5140] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.132/32] ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.171 [INFO][5140] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to califad62ff3841 ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.192 [INFO][5140] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.194 [INFO][5140] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" 
Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0", ResourceVersion:"1057", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9", Pod:"csi-node-driver-bsxtr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califad62ff3841", MAC:"c6:7e:4f:2c:b3:3c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:06.238547 containerd[2023]: 2026-01-23 23:56:06.222 [INFO][5140] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9" Namespace="calico-system" Pod="csi-node-driver-bsxtr" WorkloadEndpoint="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:06.307936 containerd[2023]: time="2026-01-23T23:56:06.306284400Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:06.307936 containerd[2023]: time="2026-01-23T23:56:06.306661584Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:06.307936 containerd[2023]: time="2026-01-23T23:56:06.306742392Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:06.307936 containerd[2023]: time="2026-01-23T23:56:06.307210200Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:06.357215 systemd[1]: Started cri-containerd-1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9.scope - libcontainer container 1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9. 
Jan 23 23:56:06.427482 containerd[2023]: time="2026-01-23T23:56:06.427397929Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-bsxtr,Uid:16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0,Namespace:calico-system,Attempt:1,} returns sandbox id \"1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9\"" Jan 23 23:56:06.431316 containerd[2023]: time="2026-01-23T23:56:06.431235253Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:56:06.526423 containerd[2023]: time="2026-01-23T23:56:06.525603302Z" level=info msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" Jan 23 23:56:06.528057 containerd[2023]: time="2026-01-23T23:56:06.527782934Z" level=info msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" Jan 23 23:56:06.528814 containerd[2023]: time="2026-01-23T23:56:06.528747206Z" level=info msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" Jan 23 23:56:06.563367 containerd[2023]: time="2026-01-23T23:56:06.562662818Z" level=info msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" Jan 23 23:56:06.709100 containerd[2023]: time="2026-01-23T23:56:06.708844046Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:06.723604 containerd[2023]: time="2026-01-23T23:56:06.723432795Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:56:06.725084 containerd[2023]: time="2026-01-23T23:56:06.723770319Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:56:06.725824 kubelet[3333]: E0123 23:56:06.725374 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:06.726105 kubelet[3333]: E0123 23:56:06.726043 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:06.726826 kubelet[3333]: E0123 23:56:06.726390 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:06.734988 containerd[2023]: time="2026-01-23T23:56:06.734141943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:56:07.066418 kubelet[3333]: E0123 23:56:07.066316 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:07.072042 containerd[2023]: time="2026-01-23T23:56:07.071971092Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:07.084145 containerd[2023]: time="2026-01-23T23:56:07.084058260Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:56:07.084344 containerd[2023]: time="2026-01-23T23:56:07.084228612Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:56:07.085473 kubelet[3333]: E0123 23:56:07.085392 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:07.085473 kubelet[3333]: E0123 23:56:07.085463 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:07.085640 kubelet[3333]: E0123 23:56:07.085576 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:07.085744 kubelet[3333]: E0123 23:56:07.085648 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: 
not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.862 [INFO][5262] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.862 [INFO][5262] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" iface="eth0" netns="/var/run/netns/cni-240b7bba-62b2-9b4e-0160-60c543d9733d" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.866 [INFO][5262] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" iface="eth0" netns="/var/run/netns/cni-240b7bba-62b2-9b4e-0160-60c543d9733d" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.868 [INFO][5262] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" iface="eth0" netns="/var/run/netns/cni-240b7bba-62b2-9b4e-0160-60c543d9733d" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.871 [INFO][5262] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:06.871 [INFO][5262] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.000 [INFO][5309] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.000 [INFO][5309] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.000 [INFO][5309] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.050 [WARNING][5309] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.050 [INFO][5309] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.058 [INFO][5309] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:07.100204 containerd[2023]: 2026-01-23 23:56:07.085 [INFO][5262] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:56:07.106452 containerd[2023]: time="2026-01-23T23:56:07.102597744Z" level=info msg="TearDown network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" successfully" Jan 23 23:56:07.106452 containerd[2023]: time="2026-01-23T23:56:07.102649752Z" level=info msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" returns successfully" Jan 23 23:56:07.108799 systemd[1]: run-netns-cni\x2d240b7bba\x2d62b2\x2d9b4e\x2d0160\x2d60c543d9733d.mount: Deactivated successfully. Jan 23 23:56:07.118764 containerd[2023]: time="2026-01-23T23:56:07.118521744Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-79vx7,Uid:3ac30c2d-dd8c-4060-a356-77e0062bc1c4,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:06.937 [WARNING][5276] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e585a60e-07e9-4e0a-95ee-73be5aa0422a", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877", Pod:"coredns-66bc5c9577-5wd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da03103d86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:06.937 [INFO][5276] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 
23:56:06.937 [INFO][5276] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" iface="eth0" netns="" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:06.937 [INFO][5276] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:06.937 [INFO][5276] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.042 [INFO][5322] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.042 [INFO][5322] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.059 [INFO][5322] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.150 [WARNING][5322] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.151 [INFO][5322] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0" Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.160 [INFO][5322] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:07.206420 containerd[2023]: 2026-01-23 23:56:07.180 [INFO][5276] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Jan 23 23:56:07.211908 containerd[2023]: time="2026-01-23T23:56:07.210860665Z" level=info msg="TearDown network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" successfully" Jan 23 23:56:07.212214 containerd[2023]: time="2026-01-23T23:56:07.211669417Z" level=info msg="StopPodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" returns successfully" Jan 23 23:56:07.214706 containerd[2023]: time="2026-01-23T23:56:07.214457845Z" level=info msg="RemovePodSandbox for \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" Jan 23 23:56:07.215301 containerd[2023]: time="2026-01-23T23:56:07.215262073Z" level=info msg="Forcibly stopping sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\"" Jan 23 23:56:07.229072 systemd-networkd[1935]: cali7688df54f08: Gained IPv6LL Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.910 [INFO][5261] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.914 [INFO][5261] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" iface="eth0" netns="/var/run/netns/cni-517ab26f-b809-01be-9927-88a03cec4496" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.914 [INFO][5261] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" iface="eth0" netns="/var/run/netns/cni-517ab26f-b809-01be-9927-88a03cec4496" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.917 [INFO][5261] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" iface="eth0" netns="/var/run/netns/cni-517ab26f-b809-01be-9927-88a03cec4496" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.917 [INFO][5261] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:06.917 [INFO][5261] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.154 [INFO][5316] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.154 [INFO][5316] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.167 [INFO][5316] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.206 [WARNING][5316] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.206 [INFO][5316] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.212 [INFO][5316] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:07.255978 containerd[2023]: 2026-01-23 23:56:07.225 [INFO][5261] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:56:07.262719 containerd[2023]: time="2026-01-23T23:56:07.257151505Z" level=info msg="TearDown network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" successfully" Jan 23 23:56:07.262719 containerd[2023]: time="2026-01-23T23:56:07.257199697Z" level=info msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" returns successfully" Jan 23 23:56:07.265553 containerd[2023]: time="2026-01-23T23:56:07.265489801Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-dfgrz,Uid:88895574-7d47-4441-9e70-eebbca18d915,Namespace:calico-apiserver,Attempt:1,}" Jan 23 23:56:07.298425 systemd[1]: run-netns-cni\x2d517ab26f\x2db809\x2d01be\x2d9927\x2d88a03cec4496.mount: Deactivated successfully. Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.945 [INFO][5264] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.949 [INFO][5264] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" iface="eth0" netns="/var/run/netns/cni-bebaa8d7-1c0a-8601-60d0-8b221fff2515" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.953 [INFO][5264] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" iface="eth0" netns="/var/run/netns/cni-bebaa8d7-1c0a-8601-60d0-8b221fff2515" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.957 [INFO][5264] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" iface="eth0" netns="/var/run/netns/cni-bebaa8d7-1c0a-8601-60d0-8b221fff2515" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.957 [INFO][5264] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:06.957 [INFO][5264] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.243 [INFO][5328] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.245 [INFO][5328] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.247 [INFO][5328] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.297 [WARNING][5328] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.297 [INFO][5328] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.309 [INFO][5328] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:07.346848 containerd[2023]: 2026-01-23 23:56:07.338 [INFO][5264] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:56:07.355510 containerd[2023]: time="2026-01-23T23:56:07.355111574Z" level=info msg="TearDown network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" successfully" Jan 23 23:56:07.355510 containerd[2023]: time="2026-01-23T23:56:07.355170530Z" level=info msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" returns successfully" Jan 23 23:56:07.359905 systemd[1]: run-netns-cni\x2dbebaa8d7\x2d1c0a\x2d8601\x2d60d0\x2d8b221fff2515.mount: Deactivated successfully. Jan 23 23:56:07.368427 containerd[2023]: time="2026-01-23T23:56:07.367816262Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zhdzf,Uid:b627f7db-d96f-4cdc-9084-8b79e8e215fb,Namespace:calico-system,Attempt:1,}" Jan 23 23:56:07.534198 containerd[2023]: time="2026-01-23T23:56:07.530039175Z" level=info msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" Jan 23 23:56:07.533755 systemd[1]: Started sshd@8-172.31.18.95:22-4.153.228.146:60664.service - OpenSSH per-connection server daemon (4.153.228.146:60664). 
Jan 23 23:56:07.613945 systemd-networkd[1935]: vxlan.calico: Gained IPv6LL
Jan 23 23:56:07.741779 systemd-networkd[1935]: califad62ff3841: Gained IPv6LL
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.436 [WARNING][5362] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"e585a60e-07e9-4e0a-95ee-73be5aa0422a", ResourceVersion:"1062", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"994f66d0ed7f22508cd03cb4fefde728266a8c1ff99aaeba0e8153a097e9c877", Pod:"coredns-66bc5c9577-5wd66", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali2da03103d86", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.438 [INFO][5362] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.438 [INFO][5362] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" iface="eth0" netns=""
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.438 [INFO][5362] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.438 [INFO][5362] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.756 [INFO][5387] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.763 [INFO][5387] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.763 [INFO][5387] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.818 [WARNING][5387] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.818 [INFO][5387] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" HandleID="k8s-pod-network.4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--5wd66-eth0"
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.826 [INFO][5387] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:56:07.872151 containerd[2023]: 2026-01-23 23:56:07.863 [INFO][5362] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c"
Jan 23 23:56:07.876391 containerd[2023]: time="2026-01-23T23:56:07.872097604Z" level=info msg="TearDown network for sandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" successfully"
Jan 23 23:56:07.883693 containerd[2023]: time="2026-01-23T23:56:07.883387240Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus."
Jan 23 23:56:07.883693 containerd[2023]: time="2026-01-23T23:56:07.883523956Z" level=info msg="RemovePodSandbox \"4ed380a18950d58bfc27129517d4adab52077b69c9079f0354ccce5d582ab01c\" returns successfully"
Jan 23 23:56:07.885429 containerd[2023]: time="2026-01-23T23:56:07.885357868Z" level=info msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\""
Jan 23 23:56:08.055307 systemd-networkd[1935]: cali151d7460f0f: Link UP
Jan 23 23:56:08.057299 systemd-networkd[1935]: cali151d7460f0f: Gained carrier
Jan 23 23:56:08.089647 kubelet[3333]: E0123 23:56:08.089220 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0"
Jan 23 23:56:08.104363 sshd[5405]: Accepted publickey for core from 4.153.228.146 port 60664 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI
Jan 23 23:56:08.116710 sshd[5405]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.455 [INFO][5343] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0 calico-apiserver-58854c8f84- calico-apiserver 3ac30c2d-dd8c-4060-a356-77e0062bc1c4 1080 0 2026-01-23 23:55:26 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58854c8f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-95 calico-apiserver-58854c8f84-79vx7 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali151d7460f0f [] [] }} ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.456 [INFO][5343] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.850 [INFO][5397] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" HandleID="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.856 [INFO][5397] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" HandleID="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000315d60), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-95", "pod":"calico-apiserver-58854c8f84-79vx7", "timestamp":"2026-01-23 23:56:07.85093522 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.858 [INFO][5397] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.859 [INFO][5397] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.861 [INFO][5397] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95'
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.907 [INFO][5397] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.916 [INFO][5397] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.925 [INFO][5397] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.931 [INFO][5397] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.938 [INFO][5397] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.940 [INFO][5397] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.945 [INFO][5397] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:07.977 [INFO][5397] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:08.021 [INFO][5397] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.133/26] block=192.168.125.128/26 handle="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:08.022 [INFO][5397] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.133/26] handle="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" host="ip-172-31-18-95"
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:08.022 [INFO][5397] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:56:08.140556 containerd[2023]: 2026-01-23 23:56:08.023 [INFO][5397] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.133/26] IPv6=[] ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" HandleID="k8s-pod-network.0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.047 [INFO][5343] cni-plugin/k8s.go 418: Populated endpoint ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ac30c2d-dd8c-4060-a356-77e0062bc1c4", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"calico-apiserver-58854c8f84-79vx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151d7460f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.048 [INFO][5343] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.133/32] ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.048 [INFO][5343] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali151d7460f0f ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.056 [INFO][5343] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.058 [INFO][5343] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ac30c2d-dd8c-4060-a356-77e0062bc1c4", ResourceVersion:"1080", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 26, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75", Pod:"calico-apiserver-58854c8f84-79vx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151d7460f0f", MAC:"96:b6:c1:12:53:a3", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:08.144463 containerd[2023]: 2026-01-23 23:56:08.128 [INFO][5343] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-79vx7" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0"
Jan 23 23:56:08.147046 systemd-logind[1997]: New session 9 of user core.
Jan 23 23:56:08.154334 systemd[1]: Started session-9.scope - Session 9 of User core.
Jan 23 23:56:08.279765 containerd[2023]: time="2026-01-23T23:56:08.278142650Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:08.279765 containerd[2023]: time="2026-01-23T23:56:08.278256002Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:08.279765 containerd[2023]: time="2026-01-23T23:56:08.278300630Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:08.279765 containerd[2023]: time="2026-01-23T23:56:08.278487170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:08.335074 systemd-networkd[1935]: caliaba3d10cdf0: Link UP
Jan 23 23:56:08.341948 systemd-networkd[1935]: caliaba3d10cdf0: Gained carrier
Jan 23 23:56:08.481409 systemd[1]: run-containerd-runc-k8s.io-0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75-runc.CRQ05N.mount: Deactivated successfully.
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:07.578 [INFO][5369] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0 calico-apiserver-58854c8f84- calico-apiserver 88895574-7d47-4441-9e70-eebbca18d915 1081 0 2026-01-23 23:55:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:58854c8f84 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-95 calico-apiserver-58854c8f84-dfgrz eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] caliaba3d10cdf0 [] [] }} ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:07.578 [INFO][5369] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:07.855 [INFO][5421] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" HandleID="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:07.858 [INFO][5421] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" HandleID="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000325910), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-95", "pod":"calico-apiserver-58854c8f84-dfgrz", "timestamp":"2026-01-23 23:56:07.855124972 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:07.859 [INFO][5421] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.022 [INFO][5421] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.022 [INFO][5421] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95'
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.124 [INFO][5421] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.190 [INFO][5421] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.215 [INFO][5421] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.225 [INFO][5421] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.239 [INFO][5421] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.239 [INFO][5421] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.254 [INFO][5421] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.277 [INFO][5421] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.308 [INFO][5421] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.134/26] block=192.168.125.128/26 handle="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.308 [INFO][5421] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.134/26] handle="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" host="ip-172-31-18-95"
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.308 [INFO][5421] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:56:08.541114 containerd[2023]: 2026-01-23 23:56:08.309 [INFO][5421] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.134/26] IPv6=[] ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" HandleID="k8s-pod-network.ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.321 [INFO][5369] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"88895574-7d47-4441-9e70-eebbca18d915", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"calico-apiserver-58854c8f84-dfgrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaba3d10cdf0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.321 [INFO][5369] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.134/32] ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.321 [INFO][5369] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliaba3d10cdf0 ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.359 [INFO][5369] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.386 [INFO][5369] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"88895574-7d47-4441-9e70-eebbca18d915", ResourceVersion:"1081", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 27, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65", Pod:"calico-apiserver-58854c8f84-dfgrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaba3d10cdf0", MAC:"ce:60:e4:52:86:4c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:08.544345 containerd[2023]: 2026-01-23 23:56:08.506 [INFO][5369] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65" Namespace="calico-apiserver" Pod="calico-apiserver-58854c8f84-dfgrz" WorkloadEndpoint="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0"
Jan 23 23:56:08.545249 systemd[1]: Started cri-containerd-0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75.scope - libcontainer container 0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75.
Jan 23 23:56:08.657796 containerd[2023]: time="2026-01-23T23:56:08.654457528Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1
Jan 23 23:56:08.657796 containerd[2023]: time="2026-01-23T23:56:08.654581680Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1
Jan 23 23:56:08.657796 containerd[2023]: time="2026-01-23T23:56:08.654620716Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:08.657796 containerd[2023]: time="2026-01-23T23:56:08.654794596Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1
Jan 23 23:56:08.751615 systemd[1]: Started cri-containerd-ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65.scope - libcontainer container ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65.
Jan 23 23:56:08.775684 systemd-networkd[1935]: cali05baba26f89: Link UP
Jan 23 23:56:08.776185 systemd-networkd[1935]: cali05baba26f89: Gained carrier
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:07.668 [INFO][5381] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0 goldmane-7c778bb748- calico-system b627f7db-d96f-4cdc-9084-8b79e8e215fb 1083 0 2026-01-23 23:55:35 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-95 goldmane-7c778bb748-zhdzf eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali05baba26f89 [] [] }} ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:07.673 [INFO][5381] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:07.919 [INFO][5427] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" HandleID="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:07.921 [INFO][5427] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" HandleID="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c180), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-95", "pod":"goldmane-7c778bb748-zhdzf", "timestamp":"2026-01-23 23:56:07.919541596 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"}
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:07.922 [INFO][5427] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock.
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.312 [INFO][5427] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock.
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.312 [INFO][5427] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95'
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.377 [INFO][5427] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.415 [INFO][5427] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.526 [INFO][5427] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.559 [INFO][5427] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.580 [INFO][5427] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.584 [INFO][5427] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.604 [INFO][5427] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.708 [INFO][5427] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.749 [INFO][5427] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.135/26] block=192.168.125.128/26 handle="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.749 [INFO][5427] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.135/26] handle="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" host="ip-172-31-18-95"
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.751 [INFO][5427] ipam/ipam_plugin.go 398: Released host-wide IPAM lock.
Jan 23 23:56:08.828687 containerd[2023]: 2026-01-23 23:56:08.751 [INFO][5427] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.135/26] IPv6=[] ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" HandleID="k8s-pod-network.565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.769 [INFO][5381] cni-plugin/k8s.go 418: Populated endpoint ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b627f7db-d96f-4cdc-9084-8b79e8e215fb", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"goldmane-7c778bb748-zhdzf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05baba26f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.769 [INFO][5381] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.135/32] ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.769 [INFO][5381] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali05baba26f89 ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.775 [INFO][5381] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0"
Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.780 [INFO][5381] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b627f7db-d96f-4cdc-9084-8b79e8e215fb", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 35, 0, time.Local), DeletionTimestamp:<nil>, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188", Pod:"goldmane-7c778bb748-zhdzf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05baba26f89", MAC:"a2:49:3e:d1:10:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}}
WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b627f7db-d96f-4cdc-9084-8b79e8e215fb", ResourceVersion:"1083", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188", Pod:"goldmane-7c778bb748-zhdzf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05baba26f89", MAC:"a2:49:3e:d1:10:45", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:08.829883 containerd[2023]: 2026-01-23 23:56:08.820 [INFO][5381] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188" Namespace="calico-system" Pod="goldmane-7c778bb748-zhdzf" WorkloadEndpoint="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:56:08.847863 sshd[5405]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:08.864973 systemd[1]: sshd@8-172.31.18.95:22-4.153.228.146:60664.service: Deactivated successfully. Jan 23 23:56:08.872606 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 23:56:08.879008 systemd-logind[1997]: Session 9 logged out. Waiting for processes to exit. Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.964 [INFO][5415] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.964 [INFO][5415] cni-plugin/dataplane_linux.go 559: Deleting workload's device in netns. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" iface="eth0" netns="/var/run/netns/cni-a60fdda6-17e7-231f-08b2-39826acc08f5" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.966 [INFO][5415] cni-plugin/dataplane_linux.go 570: Entered netns, deleting veth. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" iface="eth0" netns="/var/run/netns/cni-a60fdda6-17e7-231f-08b2-39826acc08f5" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.979 [INFO][5415] cni-plugin/dataplane_linux.go 597: Workload's veth was already gone. Nothing to do. 
ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" iface="eth0" netns="/var/run/netns/cni-a60fdda6-17e7-231f-08b2-39826acc08f5" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.982 [INFO][5415] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:07.982 [INFO][5415] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.215 [INFO][5455] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.215 [INFO][5455] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.751 [INFO][5455] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.825 [WARNING][5455] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.825 [INFO][5455] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.851 [INFO][5455] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:08.879716 containerd[2023]: 2026-01-23 23:56:08.866 [INFO][5415] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:56:08.882729 containerd[2023]: time="2026-01-23T23:56:08.880719725Z" level=info msg="TearDown network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" successfully" Jan 23 23:56:08.882729 containerd[2023]: time="2026-01-23T23:56:08.880766513Z" level=info msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" returns successfully" Jan 23 23:56:08.886058 systemd-logind[1997]: Removed session 9. 
Jan 23 23:56:08.888024 containerd[2023]: time="2026-01-23T23:56:08.887953529Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jl7gt,Uid:75d141a9-546a-4b46-adcf-a6cd7a6e3073,Namespace:kube-system,Attempt:1,}" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.188 [WARNING][5450] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.190 [INFO][5450] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.191 [INFO][5450] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" iface="eth0" netns="" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.192 [INFO][5450] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.192 [INFO][5450] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.449 [INFO][5470] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.455 [INFO][5470] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.851 [INFO][5470] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.907 [WARNING][5470] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.907 [INFO][5470] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.916 [INFO][5470] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:08.949592 containerd[2023]: 2026-01-23 23:56:08.929 [INFO][5450] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:08.949592 containerd[2023]: time="2026-01-23T23:56:08.949561458Z" level=info msg="TearDown network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" successfully" Jan 23 23:56:08.953415 containerd[2023]: time="2026-01-23T23:56:08.949598634Z" level=info msg="StopPodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" returns successfully" Jan 23 23:56:08.954049 containerd[2023]: time="2026-01-23T23:56:08.953868690Z" level=info msg="RemovePodSandbox for \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\"" Jan 23 23:56:08.954257 containerd[2023]: time="2026-01-23T23:56:08.954224610Z" level=info msg="Forcibly stopping sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\"" Jan 23 23:56:09.012161 containerd[2023]: time="2026-01-23T23:56:09.011198342Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:09.012161 containerd[2023]: time="2026-01-23T23:56:09.011290526Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:09.012161 containerd[2023]: time="2026-01-23T23:56:09.011316170Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:09.012161 containerd[2023]: time="2026-01-23T23:56:09.011473622Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:09.083294 systemd[1]: Started cri-containerd-565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188.scope - libcontainer container 565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188. Jan 23 23:56:09.303197 systemd[1]: run-netns-cni\x2da60fdda6\x2d17e7\x2d231f\x2d08b2\x2d39826acc08f5.mount: Deactivated successfully. Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.167 [WARNING][5599] cni-plugin/k8s.go 598: WorkloadEndpoint does not exist in the datastore, moving forward with the clean up ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" WorkloadEndpoint="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.168 [INFO][5599] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.170 [INFO][5599] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" iface="eth0" netns="" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.171 [INFO][5599] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.173 [INFO][5599] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.261 [INFO][5635] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.262 [INFO][5635] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.262 [INFO][5635] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.323 [WARNING][5635] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.323 [INFO][5635] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" HandleID="k8s-pod-network.949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Workload="ip--172--31--18--95-k8s-whisker--675d86896d--jvm4v-eth0" Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.331 [INFO][5635] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:09.348676 containerd[2023]: 2026-01-23 23:56:09.338 [INFO][5599] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191" Jan 23 23:56:09.348676 containerd[2023]: time="2026-01-23T23:56:09.348632032Z" level=info msg="TearDown network for sandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" successfully" Jan 23 23:56:09.361647 containerd[2023]: time="2026-01-23T23:56:09.360779260Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-79vx7,Uid:3ac30c2d-dd8c-4060-a356-77e0062bc1c4,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75\"" Jan 23 23:56:09.370799 containerd[2023]: time="2026-01-23T23:56:09.370637224Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:56:09.370799 containerd[2023]: time="2026-01-23T23:56:09.370741336Z" level=info msg="RemovePodSandbox \"949e1ef8b971ec1635aeee641eb8c8e8c30778749d73da57bc32f781eb065191\" returns successfully" Jan 23 23:56:09.373698 containerd[2023]: time="2026-01-23T23:56:09.373432264Z" level=info msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" Jan 23 23:56:09.382993 containerd[2023]: time="2026-01-23T23:56:09.381472432Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:09.406446 systemd-networkd[1935]: cali151d7460f0f: Gained IPv6LL Jan 23 23:56:09.451709 containerd[2023]: time="2026-01-23T23:56:09.447817828Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-58854c8f84-dfgrz,Uid:88895574-7d47-4441-9e70-eebbca18d915,Namespace:calico-apiserver,Attempt:1,} returns sandbox id \"ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65\"" Jan 23 23:56:09.491582 containerd[2023]: time="2026-01-23T23:56:09.491520208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-zhdzf,Uid:b627f7db-d96f-4cdc-9084-8b79e8e215fb,Namespace:calico-system,Attempt:1,} returns sandbox id \"565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188\"" Jan 23 23:56:09.568484 systemd-networkd[1935]: cali3218237363b: Link UP Jan 23 23:56:09.570929 systemd-networkd[1935]: cali3218237363b: Gained carrier Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.197 [INFO][5572] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0 coredns-66bc5c9577- kube-system 75d141a9-546a-4b46-adcf-a6cd7a6e3073 1100 0 2026-01-23 23:55:13 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-95 coredns-66bc5c9577-jl7gt eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3218237363b [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.197 [INFO][5572] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.428 [INFO][5642] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" HandleID="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.431 [INFO][5642] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" HandleID="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, 
HandleID:(*string)(0x400036da10), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-95", "pod":"coredns-66bc5c9577-jl7gt", "timestamp":"2026-01-23 23:56:09.428870584 +0000 UTC"}, Hostname:"ip-172-31-18-95", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.431 [INFO][5642] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.431 [INFO][5642] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.435 [INFO][5642] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-95' Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.471 [INFO][5642] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.484 [INFO][5642] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.502 [INFO][5642] ipam/ipam.go 511: Trying affinity for 192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.510 [INFO][5642] ipam/ipam.go 158: Attempting to load block cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.521 [INFO][5642] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.125.128/26 host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.521 [INFO][5642] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.125.128/26 handle="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.526 [INFO][5642] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.537 [INFO][5642] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.125.128/26 handle="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.553 [INFO][5642] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.125.136/26] block=192.168.125.128/26 handle="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.554 [INFO][5642] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.125.136/26] handle="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" host="ip-172-31-18-95" Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.554 [INFO][5642] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 23:56:09.612461 containerd[2023]: 2026-01-23 23:56:09.554 [INFO][5642] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.125.136/26] IPv6=[] ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" HandleID="k8s-pod-network.d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.559 [INFO][5572] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75d141a9-546a-4b46-adcf-a6cd7a6e3073", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"", Pod:"coredns-66bc5c9577-jl7gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218237363b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.559 [INFO][5572] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.125.136/32] ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.559 [INFO][5572] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3218237363b ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 
23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.573 [INFO][5572] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.574 [INFO][5572] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75d141a9-546a-4b46-adcf-a6cd7a6e3073", ResourceVersion:"1100", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc", Pod:"coredns-66bc5c9577-jl7gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218237363b", MAC:"1e:05:87:f4:72:a7", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:09.615167 containerd[2023]: 2026-01-23 23:56:09.603 [INFO][5572] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc" Namespace="kube-system" Pod="coredns-66bc5c9577-jl7gt" WorkloadEndpoint="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:56:09.674940 containerd[2023]: time="2026-01-23T23:56:09.673445105Z" level=info msg="loading plugin \"io.containerd.event.v1.publisher\"..." 
runtime=io.containerd.runc.v2 type=io.containerd.event.v1 Jan 23 23:56:09.674940 containerd[2023]: time="2026-01-23T23:56:09.673565825Z" level=info msg="loading plugin \"io.containerd.internal.v1.shutdown\"..." runtime=io.containerd.runc.v2 type=io.containerd.internal.v1 Jan 23 23:56:09.674940 containerd[2023]: time="2026-01-23T23:56:09.673603841Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.task\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:09.674940 containerd[2023]: time="2026-01-23T23:56:09.673776233Z" level=info msg="loading plugin \"io.containerd.ttrpc.v1.pause\"..." runtime=io.containerd.runc.v2 type=io.containerd.ttrpc.v1 Jan 23 23:56:09.691560 containerd[2023]: time="2026-01-23T23:56:09.691460345Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.543 [WARNING][5674] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9", Pod:"csi-node-driver-bsxtr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califad62ff3841", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.544 [INFO][5674] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.544 [INFO][5674] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" iface="eth0" netns="" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.544 [INFO][5674] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.544 [INFO][5674] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.629 [INFO][5690] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.629 [INFO][5690] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.629 [INFO][5690] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.658 [WARNING][5690] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.658 [INFO][5690] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.664 [INFO][5690] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:09.693028 containerd[2023]: 2026-01-23 23:56:09.675 [INFO][5674] cni-plugin/k8s.go 653: Teardown processing complete. 
ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.697733 containerd[2023]: time="2026-01-23T23:56:09.693065525Z" level=info msg="TearDown network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" successfully" Jan 23 23:56:09.697733 containerd[2023]: time="2026-01-23T23:56:09.693098693Z" level=info msg="StopPodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" returns successfully" Jan 23 23:56:09.697733 containerd[2023]: time="2026-01-23T23:56:09.696181865Z" level=info msg="RemovePodSandbox for \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" Jan 23 23:56:09.697733 containerd[2023]: time="2026-01-23T23:56:09.696234305Z" level=info msg="Forcibly stopping sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\"" Jan 23 23:56:09.700976 containerd[2023]: time="2026-01-23T23:56:09.700501241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:09.700976 containerd[2023]: time="2026-01-23T23:56:09.700689269Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:09.701166 kubelet[3333]: E0123 23:56:09.700807 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:09.701166 kubelet[3333]: E0123 23:56:09.700863 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:09.702444 kubelet[3333]: E0123 23:56:09.701721 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:09.702444 kubelet[3333]: E0123 23:56:09.701814 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:09.705261 containerd[2023]: time="2026-01-23T23:56:09.702775337Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:09.726154 systemd-networkd[1935]: 
caliaba3d10cdf0: Gained IPv6LL Jan 23 23:56:09.770257 systemd[1]: Started cri-containerd-d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc.scope - libcontainer container d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc. Jan 23 23:56:09.880711 containerd[2023]: time="2026-01-23T23:56:09.880470690Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-jl7gt,Uid:75d141a9-546a-4b46-adcf-a6cd7a6e3073,Namespace:kube-system,Attempt:1,} returns sandbox id \"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc\"" Jan 23 23:56:09.903024 containerd[2023]: time="2026-01-23T23:56:09.902788818Z" level=info msg="CreateContainer within sandbox \"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.840 [WARNING][5739] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0", ResourceVersion:"1108", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"1ef7739a84dc3b864387d0ea1ec8ec9333074d69bbc4a4f25c154d9f8651c8b9", Pod:"csi-node-driver-bsxtr", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.125.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"califad62ff3841", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.840 [INFO][5739] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.840 [INFO][5739] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" iface="eth0" netns="" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.840 [INFO][5739] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.840 [INFO][5739] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.907 [INFO][5754] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.907 [INFO][5754] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.907 [INFO][5754] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.924 [WARNING][5754] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.924 [INFO][5754] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" HandleID="k8s-pod-network.13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Workload="ip--172--31--18--95-k8s-csi--node--driver--bsxtr-eth0" Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.929 [INFO][5754] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:09.957601 containerd[2023]: 2026-01-23 23:56:09.941 [INFO][5739] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059" Jan 23 23:56:09.959931 containerd[2023]: time="2026-01-23T23:56:09.958589863Z" level=info msg="TearDown network for sandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" successfully" Jan 23 23:56:09.959749 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount959107280.mount: Deactivated successfully. Jan 23 23:56:09.964230 containerd[2023]: time="2026-01-23T23:56:09.964120915Z" level=info msg="CreateContainer within sandbox \"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3db01c62b40116a499bf9e52f4dec45b4876523ed8c05a0506624594eb909db8\"" Jan 23 23:56:09.965326 containerd[2023]: time="2026-01-23T23:56:09.965277967Z" level=info msg="StartContainer for \"3db01c62b40116a499bf9e52f4dec45b4876523ed8c05a0506624594eb909db8\"" Jan 23 23:56:09.971499 containerd[2023]: time="2026-01-23T23:56:09.971412955Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:56:09.971856 containerd[2023]: time="2026-01-23T23:56:09.971813839Z" level=info msg="RemovePodSandbox \"13bfac0d9adb6eee5da9cc250f1e5748550c17756c29e3cfd9e767262170c059\" returns successfully" Jan 23 23:56:09.972924 containerd[2023]: time="2026-01-23T23:56:09.972771247Z" level=info msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" Jan 23 23:56:10.027550 containerd[2023]: time="2026-01-23T23:56:10.027359403Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:10.031767 containerd[2023]: time="2026-01-23T23:56:10.029851839Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:10.031767 containerd[2023]: time="2026-01-23T23:56:10.030300027Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:10.033097 kubelet[3333]: E0123 23:56:10.030974 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:10.033097 kubelet[3333]: E0123 23:56:10.031035 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:10.033097 kubelet[3333]: E0123 23:56:10.031264 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:10.033097 kubelet[3333]: E0123 23:56:10.031324 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:10.030348 systemd[1]: Started cri-containerd-3db01c62b40116a499bf9e52f4dec45b4876523ed8c05a0506624594eb909db8.scope - libcontainer container 3db01c62b40116a499bf9e52f4dec45b4876523ed8c05a0506624594eb909db8. 
Jan 23 23:56:10.033551 containerd[2023]: time="2026-01-23T23:56:10.033374103Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:56:10.108022 kubelet[3333]: E0123 23:56:10.106966 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:10.127385 kubelet[3333]: E0123 23:56:10.127233 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:10.167296 containerd[2023]: time="2026-01-23T23:56:10.166846036Z" level=info msg="StartContainer for \"3db01c62b40116a499bf9e52f4dec45b4876523ed8c05a0506624594eb909db8\" returns successfully" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.093 [WARNING][5780] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0", GenerateName:"calico-kube-controllers-54c598b4dd-", Namespace:"calico-system", SelfLink:"", UID:"32710301-53d1-443d-ade3-ac9179beb56f", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c598b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28", Pod:"calico-kube-controllers-54c598b4dd-zn6ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7688df54f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.095 [INFO][5780] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.095 [INFO][5780] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" iface="eth0" netns="" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.095 [INFO][5780] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.095 [INFO][5780] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.189 [INFO][5810] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.189 [INFO][5810] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.189 [INFO][5810] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.222 [WARNING][5810] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.222 [INFO][5810] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.234 [INFO][5810] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:10.242179 containerd[2023]: 2026-01-23 23:56:10.238 [INFO][5780] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.243658 containerd[2023]: time="2026-01-23T23:56:10.242236396Z" level=info msg="TearDown network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" successfully" Jan 23 23:56:10.243658 containerd[2023]: time="2026-01-23T23:56:10.242274052Z" level=info msg="StopPodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" returns successfully" Jan 23 23:56:10.243658 containerd[2023]: time="2026-01-23T23:56:10.243332728Z" level=info msg="RemovePodSandbox for \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" Jan 23 23:56:10.243658 containerd[2023]: time="2026-01-23T23:56:10.243382288Z" level=info msg="Forcibly stopping sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\"" Jan 23 23:56:10.301023 systemd-networkd[1935]: cali05baba26f89: Gained IPv6LL Jan 23 23:56:10.307509 kubelet[3333]: E0123 23:56:10.305429 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:10.307509 kubelet[3333]: E0123 23:56:10.305486 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:10.307509 kubelet[3333]: E0123 23:56:10.305591 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:10.307509 kubelet[3333]: E0123 23:56:10.305642 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:56:10.308153 containerd[2023]: time="2026-01-23T23:56:10.302160496Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:10.308153 containerd[2023]: time="2026-01-23T23:56:10.305013916Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:56:10.308153 containerd[2023]: time="2026-01-23T23:56:10.305227864Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.358 [WARNING][5835] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0", GenerateName:"calico-kube-controllers-54c598b4dd-", Namespace:"calico-system", SelfLink:"", UID:"32710301-53d1-443d-ade3-ac9179beb56f", ResourceVersion:"1088", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 40, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"54c598b4dd", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"da17cc41ab5965c5fe1fda79813406f5a06c5c184a3152af2d03ed347fafcb28", Pod:"calico-kube-controllers-54c598b4dd-zn6ss", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.125.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali7688df54f08", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.359 [INFO][5835] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.359 [INFO][5835] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" iface="eth0" netns="" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.359 [INFO][5835] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.359 [INFO][5835] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.413 [INFO][5845] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.414 [INFO][5845] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.414 [INFO][5845] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.430 [WARNING][5845] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.430 [INFO][5845] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" HandleID="k8s-pod-network.49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Workload="ip--172--31--18--95-k8s-calico--kube--controllers--54c598b4dd--zn6ss-eth0" Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.433 [INFO][5845] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:56:10.443989 containerd[2023]: 2026-01-23 23:56:10.438 [INFO][5835] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382" Jan 23 23:56:10.443989 containerd[2023]: time="2026-01-23T23:56:10.443084597Z" level=info msg="TearDown network for sandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" successfully" Jan 23 23:56:10.462130 containerd[2023]: time="2026-01-23T23:56:10.461851121Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." 
Jan 23 23:56:10.462130 containerd[2023]: time="2026-01-23T23:56:10.461972945Z" level=info msg="RemovePodSandbox \"49bd311a80510e0c42ff7867257b7c46aad0586ca49e915e8b3b4ca99a02d382\" returns successfully" Jan 23 23:56:11.005637 systemd-networkd[1935]: cali3218237363b: Gained IPv6LL Jan 23 23:56:11.170488 kubelet[3333]: E0123 23:56:11.170240 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:11.170488 kubelet[3333]: E0123 23:56:11.170331 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:11.172302 kubelet[3333]: E0123 23:56:11.170800 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:56:11.247137 kubelet[3333]: I0123 23:56:11.246021 3333 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-jl7gt" podStartSLOduration=58.245997653 podStartE2EDuration="58.245997653s" podCreationTimestamp="2026-01-23 23:55:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 23:56:11.216386945 +0000 UTC m=+64.957031340" watchObservedRunningTime="2026-01-23 23:56:11.245997653 +0000 UTC m=+64.986642024" Jan 23 23:56:13.844080 ntpd[1992]: Listen normally on 7 vxlan.calico 192.168.125.128:123 Jan 23 23:56:13.844206 ntpd[1992]: Listen normally on 8 califc277d7c3be [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 23:56:13.844285 ntpd[1992]: Listen normally on 9 cali2da03103d86 [fe80::ecee:eeff:feee:eeee%5]:123 Jan 23 23:56:13.844354 ntpd[1992]: Listen normally on 10 cali7688df54f08 [fe80::ecee:eeff:feee:eeee%6]:123 Jan 23 23:56:13.844423 ntpd[1992]: Listen normally on 11 vxlan.calico [fe80::64e9:d3ff:fe73:d1df%7]:123 Jan 23 23:56:13.844489 ntpd[1992]: Listen normally on 12 califad62ff3841 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 23:56:13.844555 ntpd[1992]: Listen normally on 13 cali151d7460f0f [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 23:56:13.844620 ntpd[1992]: Listen normally on 14 caliaba3d10cdf0 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 23:56:13.844688 ntpd[1992]: Listen normally on 15 cali05baba26f89 [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 23:56:13.844759 ntpd[1992]: Listen normally on 16 cali3218237363b [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 23:56:13.933699 systemd[1]: Started sshd@9-172.31.18.95:22-4.153.228.146:60668.service - OpenSSH per-connection server daemon (4.153.228.146:60668). Jan 23 23:56:14.455600 sshd[5866]: Accepted publickey for core from 4.153.228.146 port 60668 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:14.459093 sshd[5866]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:14.467985 systemd-logind[1997]: New session 10 of user core. Jan 23 23:56:14.476190 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 23:56:14.935281 sshd[5866]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:14.942284 systemd[1]: sshd@9-172.31.18.95:22-4.153.228.146:60668.service: Deactivated successfully. Jan 23 23:56:14.946176 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 23:56:14.947382 systemd-logind[1997]: Session 10 logged out. Waiting for processes to exit. Jan 23 23:56:14.951463 systemd-logind[1997]: Removed session 10. Jan 23 23:56:15.047430 systemd[1]: Started sshd@10-172.31.18.95:22-4.153.228.146:49624.service - OpenSSH per-connection server daemon (4.153.228.146:49624). Jan 23 23:56:15.579242 sshd[5882]: Accepted publickey for core from 4.153.228.146 port 49624 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:15.582538 sshd[5882]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:15.589932 systemd-logind[1997]: New session 11 of user core. Jan 23 23:56:15.600214 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 23 23:56:16.154114 sshd[5882]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:16.159859 systemd[1]: sshd@10-172.31.18.95:22-4.153.228.146:49624.service: Deactivated successfully. Jan 23 23:56:16.163995 systemd[1]: session-11.scope: Deactivated successfully. Jan 23 23:56:16.167101 systemd-logind[1997]: Session 11 logged out. 
Waiting for processes to exit. Jan 23 23:56:16.170683 systemd-logind[1997]: Removed session 11. Jan 23 23:56:16.242443 systemd[1]: Started sshd@11-172.31.18.95:22-4.153.228.146:49632.service - OpenSSH per-connection server daemon (4.153.228.146:49632). Jan 23 23:56:16.741448 sshd[5895]: Accepted publickey for core from 4.153.228.146 port 49632 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:16.744561 sshd[5895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:16.754105 systemd-logind[1997]: New session 12 of user core. Jan 23 23:56:16.759236 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 23 23:56:17.212805 sshd[5895]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:17.220222 systemd[1]: sshd@11-172.31.18.95:22-4.153.228.146:49632.service: Deactivated successfully. Jan 23 23:56:17.226408 systemd[1]: session-12.scope: Deactivated successfully. Jan 23 23:56:17.228274 systemd-logind[1997]: Session 12 logged out. Waiting for processes to exit. Jan 23 23:56:17.230775 systemd-logind[1997]: Removed session 12. Jan 23 23:56:19.528074 containerd[2023]: time="2026-01-23T23:56:19.527737742Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:56:19.781623 containerd[2023]: time="2026-01-23T23:56:19.781281135Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:19.783624 containerd[2023]: time="2026-01-23T23:56:19.783427491Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:56:19.783624 containerd[2023]: time="2026-01-23T23:56:19.783570567Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:19.784262 kubelet[3333]: E0123 23:56:19.783986 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:19.784262 kubelet[3333]: E0123 23:56:19.784050 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:19.784836 kubelet[3333]: E0123 23:56:19.784301 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:19.784836 kubelet[3333]: E0123 
23:56:19.784362 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:19.785371 containerd[2023]: time="2026-01-23T23:56:19.785319639Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:56:20.066454 containerd[2023]: time="2026-01-23T23:56:20.066288265Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:20.068594 containerd[2023]: time="2026-01-23T23:56:20.068444449Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:56:20.068594 containerd[2023]: time="2026-01-23T23:56:20.068553085Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:56:20.069035 kubelet[3333]: E0123 23:56:20.068966 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:20.069175 kubelet[3333]: E0123 23:56:20.069037 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:20.069712 kubelet[3333]: E0123 23:56:20.069184 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:20.072389 containerd[2023]: time="2026-01-23T23:56:20.072069853Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:56:20.355446 containerd[2023]: time="2026-01-23T23:56:20.355126586Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:20.357970 containerd[2023]: time="2026-01-23T23:56:20.357703394Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:56:20.357970 containerd[2023]: 
time="2026-01-23T23:56:20.357865670Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:20.358643 kubelet[3333]: E0123 23:56:20.358401 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:20.358643 kubelet[3333]: E0123 23:56:20.358493 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:20.359060 kubelet[3333]: E0123 23:56:20.358836 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:20.359978 kubelet[3333]: E0123 23:56:20.358955 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:21.524593 containerd[2023]: time="2026-01-23T23:56:21.524233828Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:56:21.783583 containerd[2023]: time="2026-01-23T23:56:21.783390833Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:21.786879 containerd[2023]: time="2026-01-23T23:56:21.785710913Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:56:21.786879 containerd[2023]: time="2026-01-23T23:56:21.785810177Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:56:21.787097 kubelet[3333]: E0123 23:56:21.786027 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:21.787097 kubelet[3333]: E0123 23:56:21.786084 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:21.787097 kubelet[3333]: E0123 23:56:21.786195 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:21.790234 containerd[2023]: time="2026-01-23T23:56:21.790175525Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:56:22.182008 containerd[2023]: time="2026-01-23T23:56:22.181657671Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:22.184860 containerd[2023]: time="2026-01-23T23:56:22.184629819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:56:22.184860 containerd[2023]: time="2026-01-23T23:56:22.184732035Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:56:22.186016 kubelet[3333]: E0123 23:56:22.185038 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:22.186016 kubelet[3333]: E0123 23:56:22.185103 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:22.186016 kubelet[3333]: E0123 23:56:22.185207 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:22.186420 kubelet[3333]: E0123 23:56:22.185274 3333 pod_workers.go:1324] "Error syncing 
pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:56:22.307429 systemd[1]: Started sshd@12-172.31.18.95:22-4.153.228.146:49638.service - OpenSSH per-connection server daemon (4.153.228.146:49638). Jan 23 23:56:22.525432 containerd[2023]: time="2026-01-23T23:56:22.525255737Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:56:22.792032 containerd[2023]: time="2026-01-23T23:56:22.791657466Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:22.794103 containerd[2023]: time="2026-01-23T23:56:22.794026974Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:56:22.794210 containerd[2023]: time="2026-01-23T23:56:22.794181738Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:22.794573 kubelet[3333]: E0123 23:56:22.794489 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:22.795157 kubelet[3333]: E0123 23:56:22.794570 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:22.795157 kubelet[3333]: E0123 23:56:22.794673 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:22.795157 kubelet[3333]: E0123 23:56:22.794723 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:56:22.813386 sshd[5921]: Accepted publickey for core from 4.153.228.146 port 49638 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:22.816283 sshd[5921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:22.825444 systemd-logind[1997]: New session 13 of user core. Jan 23 23:56:22.830192 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 23 23:56:23.297365 sshd[5921]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:23.304741 systemd[1]: sshd@12-172.31.18.95:22-4.153.228.146:49638.service: Deactivated successfully. Jan 23 23:56:23.309561 systemd[1]: session-13.scope: Deactivated successfully. Jan 23 23:56:23.311703 systemd-logind[1997]: Session 13 logged out. Waiting for processes to exit. Jan 23 23:56:23.314115 systemd-logind[1997]: Removed session 13. Jan 23 23:56:23.524602 containerd[2023]: time="2026-01-23T23:56:23.524543646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:23.793576 containerd[2023]: time="2026-01-23T23:56:23.793495531Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:23.795852 containerd[2023]: time="2026-01-23T23:56:23.795713503Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:23.795852 containerd[2023]: time="2026-01-23T23:56:23.795798223Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:23.796127 kubelet[3333]: E0123 23:56:23.796061 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:23.796632 kubelet[3333]: E0123 23:56:23.796132 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:23.796632 kubelet[3333]: E0123 23:56:23.796242 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:23.796632 kubelet[3333]: E0123 23:56:23.796296 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:26.526603 containerd[2023]: time="2026-01-23T23:56:26.526186905Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:26.803268 containerd[2023]: time="2026-01-23T23:56:26.803042338Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:26.805272 containerd[2023]: time="2026-01-23T23:56:26.805158538Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:26.805483 containerd[2023]: time="2026-01-23T23:56:26.805293034Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:26.805646 kubelet[3333]: E0123 23:56:26.805588 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:26.806200 kubelet[3333]: E0123 23:56:26.805659 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:26.806200 kubelet[3333]: E0123 23:56:26.805766 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:26.806200 kubelet[3333]: E0123 23:56:26.805821 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:28.395417 systemd[1]: Started sshd@13-172.31.18.95:22-4.153.228.146:47988.service - OpenSSH per-connection server daemon (4.153.228.146:47988). 
Jan 23 23:56:28.898948 sshd[5940]: Accepted publickey for core from 4.153.228.146 port 47988 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:28.900305 sshd[5940]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:28.910136 systemd-logind[1997]: New session 14 of user core. Jan 23 23:56:28.915260 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 23 23:56:29.376710 sshd[5940]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:29.385210 systemd[1]: sshd@13-172.31.18.95:22-4.153.228.146:47988.service: Deactivated successfully. Jan 23 23:56:29.390826 systemd[1]: session-14.scope: Deactivated successfully. Jan 23 23:56:29.392192 systemd-logind[1997]: Session 14 logged out. Waiting for processes to exit. Jan 23 23:56:29.394526 systemd-logind[1997]: Removed session 14. Jan 23 23:56:31.525098 kubelet[3333]: E0123 23:56:31.524914 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:33.527883 kubelet[3333]: E0123 23:56:33.527750 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:34.489532 systemd[1]: Started sshd@14-172.31.18.95:22-4.153.228.146:47990.service - OpenSSH per-connection server daemon (4.153.228.146:47990). 
Jan 23 23:56:34.532571 kubelet[3333]: E0123 23:56:34.532507 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:56:35.035708 sshd[5979]: Accepted publickey for core from 4.153.228.146 port 47990 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:35.038546 sshd[5979]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:35.047132 systemd-logind[1997]: New session 15 of user core. Jan 23 23:56:35.054173 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 23 23:56:35.524266 kubelet[3333]: E0123 23:56:35.524178 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:35.543251 sshd[5979]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:35.554822 systemd[1]: sshd@14-172.31.18.95:22-4.153.228.146:47990.service: Deactivated successfully. Jan 23 23:56:35.558844 systemd[1]: session-15.scope: Deactivated successfully. Jan 23 23:56:35.561155 systemd-logind[1997]: Session 15 logged out. Waiting for processes to exit. Jan 23 23:56:35.563264 systemd-logind[1997]: Removed session 15. Jan 23 23:56:37.528443 kubelet[3333]: E0123 23:56:37.528307 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:56:40.650936 systemd[1]: Started sshd@15-172.31.18.95:22-4.153.228.146:52714.service - OpenSSH per-connection server daemon (4.153.228.146:52714). 
Jan 23 23:56:41.208287 sshd[5996]: Accepted publickey for core from 4.153.228.146 port 52714 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:41.211206 sshd[5996]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:41.220981 systemd-logind[1997]: New session 16 of user core. Jan 23 23:56:41.230272 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 23:56:41.526600 kubelet[3333]: E0123 23:56:41.525414 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:41.758263 sshd[5996]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:41.766600 systemd[1]: sshd@15-172.31.18.95:22-4.153.228.146:52714.service: Deactivated successfully. Jan 23 23:56:41.771957 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 23:56:41.777702 systemd-logind[1997]: Session 16 logged out. Waiting for processes to exit. Jan 23 23:56:41.782124 systemd-logind[1997]: Removed session 16. Jan 23 23:56:41.849492 systemd[1]: Started sshd@16-172.31.18.95:22-4.153.228.146:52724.service - OpenSSH per-connection server daemon (4.153.228.146:52724). Jan 23 23:56:42.355127 sshd[6009]: Accepted publickey for core from 4.153.228.146 port 52724 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:42.358684 sshd[6009]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:42.369406 systemd-logind[1997]: New session 17 of user core. Jan 23 23:56:42.375531 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 23 23:56:43.161469 sshd[6009]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:43.171204 systemd[1]: sshd@16-172.31.18.95:22-4.153.228.146:52724.service: Deactivated successfully. Jan 23 23:56:43.179403 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 23:56:43.183797 systemd-logind[1997]: Session 17 logged out. Waiting for processes to exit. Jan 23 23:56:43.186670 systemd-logind[1997]: Removed session 17. Jan 23 23:56:43.261118 systemd[1]: Started sshd@17-172.31.18.95:22-4.153.228.146:52726.service - OpenSSH per-connection server daemon (4.153.228.146:52726). Jan 23 23:56:43.774969 sshd[6020]: Accepted publickey for core from 4.153.228.146 port 52726 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:43.777734 sshd[6020]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:43.787370 systemd-logind[1997]: New session 18 of user core. Jan 23 23:56:43.796160 systemd[1]: Started session-18.scope - Session 18 of User core. 
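The sshd records in this stretch repeat a fixed systemd lifecycle: a per-connection sshd@N-...service unit, a pam_unix session open, a session-N.scope, then teardown of both in reverse. When skimming a journal this long it can help to collapse those into session spans; a small, purely hypothetical extractor tuned to these exact logind lines:

    package main

    import (
        "fmt"
        "regexp"
    )

    // Patterns tailored to the systemd-logind records above,
    // not a general journal parser.
    var (
        openRe  = regexp.MustCompile(`New session (\d+) of user (\w+)`)
        closeRe = regexp.MustCompile(`Removed session (\d+)\.`)
    )

    func main() {
        lines := []string{
            "systemd-logind[1997]: New session 17 of user core.",
            "systemd-logind[1997]: Removed session 17.",
            "systemd-logind[1997]: New session 18 of user core.",
        }
        open := map[string]string{} // session id -> user
        for _, l := range lines {
            if m := openRe.FindStringSubmatch(l); m != nil {
                open[m[1]] = m[2]
            } else if m := closeRe.FindStringSubmatch(l); m != nil {
                fmt.Printf("session %s (user %s) closed\n", m[1], open[m[1]])
                delete(open, m[1])
            }
        }
        fmt.Printf("still open: %v\n", open) // map[18:core]
    }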
Jan 23 23:56:44.528211 containerd[2023]: time="2026-01-23T23:56:44.528146270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:56:44.796813 containerd[2023]: time="2026-01-23T23:56:44.796453588Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:44.799019 containerd[2023]: time="2026-01-23T23:56:44.798944092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:56:44.799190 containerd[2023]: time="2026-01-23T23:56:44.799095196Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:44.799619 kubelet[3333]: E0123 23:56:44.799559 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:44.800422 kubelet[3333]: E0123 23:56:44.799632 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:56:44.800422 kubelet[3333]: E0123 23:56:44.799769 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:44.800422 kubelet[3333]: E0123 23:56:44.799826 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:45.228942 sshd[6020]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:45.237295 systemd-logind[1997]: Session 18 logged out. Waiting for processes to exit. Jan 23 23:56:45.238990 systemd[1]: sshd@17-172.31.18.95:22-4.153.228.146:52726.service: Deactivated successfully. Jan 23 23:56:45.246695 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 23:56:45.249810 systemd-logind[1997]: Removed session 18. 
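Every failed pull above follows the same chain: containerd's resolver logs "trying next host - response was http.StatusNotFound" once ghcr.io answers 404 for the tag, and the CRI layer hands kubelet the "rpc error: code = NotFound" string repeated through this log. A rough sketch of that error translation using the real grpc status API; resolveRef and errNotFound are made-up stand-ins, not containerd's code:

    package main

    import (
        "errors"
        "fmt"
        "net/http"

        "google.golang.org/grpc/codes"
        "google.golang.org/grpc/status"
    )

    // errNotFound stands in for the resolver error returned after every
    // registry host answered 404.
    var errNotFound = fmt.Errorf("registry answered %d", http.StatusNotFound)

    // resolveRef pretends to resolve an image reference; here every tag
    // is missing, like ghcr.io/flatcar/calico/*:v3.30.4 above.
    func resolveRef(ref string) error {
        return fmt.Errorf("failed to resolve reference %q: %w", ref, errNotFound)
    }

    // pullImage mirrors the CRI-side translation: a resolver miss comes
    // back to the kubelet as rpc error: code = NotFound.
    func pullImage(ref string) error {
        if err := resolveRef(ref); err != nil {
            if errors.Is(err, errNotFound) {
                return status.Errorf(codes.NotFound,
                    "failed to pull and unpack image %q: %v", ref, err)
            }
            return err
        }
        return nil
    }

    func main() {
        err := pullImage("ghcr.io/flatcar/calico/kube-controllers:v3.30.4")
        fmt.Println(err) // rpc error: code = NotFound desc = ...
    }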
Jan 23 23:56:45.333423 systemd[1]: Started sshd@18-172.31.18.95:22-4.153.228.146:51560.service - OpenSSH per-connection server daemon (4.153.228.146:51560). Jan 23 23:56:45.525829 containerd[2023]: time="2026-01-23T23:56:45.524719767Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:56:45.855057 containerd[2023]: time="2026-01-23T23:56:45.854821217Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:45.858008 containerd[2023]: time="2026-01-23T23:56:45.857938577Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:56:45.858153 containerd[2023]: time="2026-01-23T23:56:45.858087977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:56:45.858408 kubelet[3333]: E0123 23:56:45.858353 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:45.858998 kubelet[3333]: E0123 23:56:45.858423 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:56:45.858998 kubelet[3333]: E0123 23:56:45.858536 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:45.861095 containerd[2023]: time="2026-01-23T23:56:45.861018845Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:56:45.885930 sshd[6039]: Accepted publickey for core from 4.153.228.146 port 51560 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:45.890280 sshd[6039]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:45.906743 systemd-logind[1997]: New session 19 of user core. Jan 23 23:56:45.915189 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 23:56:46.153815 containerd[2023]: time="2026-01-23T23:56:46.153645386Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:46.155993 containerd[2023]: time="2026-01-23T23:56:46.155812010Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:56:46.155993 containerd[2023]: time="2026-01-23T23:56:46.155939918Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:56:46.157087 kubelet[3333]: E0123 23:56:46.156333 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:46.157087 kubelet[3333]: E0123 23:56:46.156398 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:56:46.157087 kubelet[3333]: E0123 23:56:46.156505 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:46.157428 kubelet[3333]: E0123 23:56:46.156569 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:46.536986 containerd[2023]: time="2026-01-23T23:56:46.534790000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:46.691257 sshd[6039]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:46.701581 systemd[1]: sshd@18-172.31.18.95:22-4.153.228.146:51560.service: Deactivated successfully. Jan 23 23:56:46.707169 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 23:56:46.710007 systemd-logind[1997]: Session 19 logged out. 
Waiting for processes to exit. Jan 23 23:56:46.714717 systemd-logind[1997]: Removed session 19. Jan 23 23:56:46.779879 systemd[1]: Started sshd@19-172.31.18.95:22-4.153.228.146:51574.service - OpenSSH per-connection server daemon (4.153.228.146:51574). Jan 23 23:56:46.821222 containerd[2023]: time="2026-01-23T23:56:46.821143398Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:46.823193 containerd[2023]: time="2026-01-23T23:56:46.823114698Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:46.823294 containerd[2023]: time="2026-01-23T23:56:46.823244190Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:46.823838 kubelet[3333]: E0123 23:56:46.823482 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:46.823838 kubelet[3333]: E0123 23:56:46.823565 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:46.823838 kubelet[3333]: E0123 23:56:46.823688 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:46.823838 kubelet[3333]: E0123 23:56:46.823744 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:56:47.283731 sshd[6056]: Accepted publickey for core from 4.153.228.146 port 51574 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:47.287605 sshd[6056]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:47.302404 systemd-logind[1997]: New session 20 of user core. Jan 23 23:56:47.307159 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 23:56:47.754043 sshd[6056]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:47.761331 systemd[1]: sshd@19-172.31.18.95:22-4.153.228.146:51574.service: Deactivated successfully. 
Jan 23 23:56:47.765347 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 23:56:47.766932 systemd-logind[1997]: Session 20 logged out. Waiting for processes to exit. Jan 23 23:56:47.768977 systemd-logind[1997]: Removed session 20. Jan 23 23:56:48.528962 containerd[2023]: time="2026-01-23T23:56:48.528613554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:56:48.848840 containerd[2023]: time="2026-01-23T23:56:48.848509220Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:48.850769 containerd[2023]: time="2026-01-23T23:56:48.850704092Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:56:48.850900 containerd[2023]: time="2026-01-23T23:56:48.850844468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:56:48.851112 kubelet[3333]: E0123 23:56:48.851066 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:48.852537 kubelet[3333]: E0123 23:56:48.851124 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:56:48.852537 kubelet[3333]: E0123 23:56:48.851247 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:48.854960 containerd[2023]: time="2026-01-23T23:56:48.854765900Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:56:49.099451 containerd[2023]: time="2026-01-23T23:56:49.099295169Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:49.101555 containerd[2023]: time="2026-01-23T23:56:49.101395493Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:56:49.101555 containerd[2023]: time="2026-01-23T23:56:49.101496533Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:56:49.101802 kubelet[3333]: E0123 23:56:49.101739 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:49.101917 kubelet[3333]: E0123 23:56:49.101810 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:56:49.102234 kubelet[3333]: E0123 23:56:49.101976 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:49.102431 kubelet[3333]: E0123 23:56:49.102376 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:56:49.525826 containerd[2023]: time="2026-01-23T23:56:49.525767839Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:56:49.776087 containerd[2023]: time="2026-01-23T23:56:49.775861760Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:49.778258 containerd[2023]: time="2026-01-23T23:56:49.778093256Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:56:49.778258 containerd[2023]: time="2026-01-23T23:56:49.778195892Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:49.778463 kubelet[3333]: E0123 23:56:49.778416 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:49.778557 kubelet[3333]: E0123 23:56:49.778475 3333 
kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:56:49.778614 kubelet[3333]: E0123 23:56:49.778576 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:49.778690 kubelet[3333]: E0123 23:56:49.778626 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:56:52.865612 systemd[1]: Started sshd@20-172.31.18.95:22-4.153.228.146:51590.service - OpenSSH per-connection server daemon (4.153.228.146:51590). Jan 23 23:56:53.402951 sshd[6076]: Accepted publickey for core from 4.153.228.146 port 51590 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:53.405731 sshd[6076]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:53.417180 systemd-logind[1997]: New session 21 of user core. Jan 23 23:56:53.426250 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 23:56:53.529938 containerd[2023]: time="2026-01-23T23:56:53.526930943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:56:53.823604 containerd[2023]: time="2026-01-23T23:56:53.820987944Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:56:53.827675 containerd[2023]: time="2026-01-23T23:56:53.827459292Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:56:53.827675 containerd[2023]: time="2026-01-23T23:56:53.827618748Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:56:53.828836 kubelet[3333]: E0123 23:56:53.828111 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:53.828836 kubelet[3333]: E0123 23:56:53.828176 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:56:53.828836 kubelet[3333]: E0123 23:56:53.828275 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:56:53.828836 kubelet[3333]: E0123 23:56:53.828326 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:56:53.940559 sshd[6076]: pam_unix(sshd:session): session closed for user core Jan 23 23:56:53.948976 systemd[1]: sshd@20-172.31.18.95:22-4.153.228.146:51590.service: Deactivated successfully. Jan 23 23:56:53.949624 systemd-logind[1997]: Session 21 logged out. Waiting for processes to exit. Jan 23 23:56:53.955881 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 23:56:53.965858 systemd-logind[1997]: Removed session 21. 
Jan 23 23:56:57.526747 kubelet[3333]: E0123 23:56:57.525986 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:56:59.046375 systemd[1]: Started sshd@21-172.31.18.95:22-4.153.228.146:35646.service - OpenSSH per-connection server daemon (4.153.228.146:35646). Jan 23 23:56:59.527839 kubelet[3333]: E0123 23:56:59.526731 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:56:59.602924 sshd[6090]: Accepted publickey for core from 4.153.228.146 port 35646 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:56:59.605577 sshd[6090]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:56:59.624453 systemd-logind[1997]: New session 22 of user core. Jan 23 23:56:59.629227 systemd[1]: Started session-22.scope - Session 22 of User core. Jan 23 23:57:00.124670 sshd[6090]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:00.130057 systemd-logind[1997]: Session 22 logged out. Waiting for processes to exit. Jan 23 23:57:00.133494 systemd[1]: sshd@21-172.31.18.95:22-4.153.228.146:35646.service: Deactivated successfully. Jan 23 23:57:00.137548 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 23:57:00.143409 systemd-logind[1997]: Removed session 22. 
Jan 23 23:57:01.524338 kubelet[3333]: E0123 23:57:01.524214 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:57:01.527537 kubelet[3333]: E0123 23:57:01.527376 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:57:04.536422 kubelet[3333]: E0123 23:57:04.536355 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:57:05.233632 systemd[1]: Started sshd@22-172.31.18.95:22-4.153.228.146:49506.service - OpenSSH per-connection server daemon (4.153.228.146:49506). Jan 23 23:57:05.526324 kubelet[3333]: E0123 23:57:05.526155 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:57:05.799411 sshd[6126]: Accepted publickey for core from 4.153.228.146 port 49506 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:05.804740 sshd[6126]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:05.816227 systemd-logind[1997]: New session 23 of user core. Jan 23 23:57:05.826261 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 23 23:57:06.380665 sshd[6126]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:06.392152 systemd[1]: sshd@22-172.31.18.95:22-4.153.228.146:49506.service: Deactivated successfully. Jan 23 23:57:06.398777 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 23:57:06.404603 systemd-logind[1997]: Session 23 logged out. Waiting for processes to exit. Jan 23 23:57:06.407230 systemd-logind[1997]: Removed session 23. Jan 23 23:57:10.468307 containerd[2023]: time="2026-01-23T23:57:10.467958387Z" level=info msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" Jan 23 23:57:10.538211 kubelet[3333]: E0123 23:57:10.536922 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:57:10.539970 kubelet[3333]: E0123 23:57:10.538375 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.602 [WARNING][6148] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"88895574-7d47-4441-9e70-eebbca18d915", ResourceVersion:"1501", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65", Pod:"calico-apiserver-58854c8f84-dfgrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaba3d10cdf0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.603 [INFO][6148] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.603 [INFO][6148] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" iface="eth0" netns="" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.603 [INFO][6148] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.603 [INFO][6148] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.690 [INFO][6155] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.692 [INFO][6155] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.692 [INFO][6155] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.708 [WARNING][6155] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.708 [INFO][6155] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.710 [INFO][6155] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:10.721216 containerd[2023]: 2026-01-23 23:57:10.717 [INFO][6148] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.721216 containerd[2023]: time="2026-01-23T23:57:10.720666172Z" level=info msg="TearDown network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" successfully" Jan 23 23:57:10.721216 containerd[2023]: time="2026-01-23T23:57:10.720710392Z" level=info msg="StopPodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" returns successfully" Jan 23 23:57:10.724416 containerd[2023]: time="2026-01-23T23:57:10.723139936Z" level=info msg="RemovePodSandbox for \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" Jan 23 23:57:10.724416 containerd[2023]: time="2026-01-23T23:57:10.723200188Z" level=info msg="Forcibly stopping sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\"" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.815 [WARNING][6169] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"88895574-7d47-4441-9e70-eebbca18d915", ResourceVersion:"1501", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"ce57427962e993fd428c982a6183ae2fe35a325802fe0ab98d09b826838c0c65", Pod:"calico-apiserver-58854c8f84-dfgrz", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"caliaba3d10cdf0", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.815 [INFO][6169] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.815 [INFO][6169] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" iface="eth0" netns="" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.815 [INFO][6169] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.815 [INFO][6169] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.872 [INFO][6177] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.874 [INFO][6177] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.875 [INFO][6177] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.911 [WARNING][6177] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.911 [INFO][6177] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" HandleID="k8s-pod-network.b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--dfgrz-eth0" Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.943 [INFO][6177] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:10.952502 containerd[2023]: 2026-01-23 23:57:10.949 [INFO][6169] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0" Jan 23 23:57:10.952502 containerd[2023]: time="2026-01-23T23:57:10.952465158Z" level=info msg="TearDown network for sandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" successfully" Jan 23 23:57:10.960456 containerd[2023]: time="2026-01-23T23:57:10.960360222Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:10.960660 containerd[2023]: time="2026-01-23T23:57:10.960466974Z" level=info msg="RemovePodSandbox \"b798ea033b018978dd8ac511f6a05ed6d14815e443f9e9c92e5f47a389944fa0\" returns successfully" Jan 23 23:57:10.961467 containerd[2023]: time="2026-01-23T23:57:10.961400946Z" level=info msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.058 [WARNING][6191] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ac30c2d-dd8c-4060-a356-77e0062bc1c4", ResourceVersion:"1524", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75", Pod:"calico-apiserver-58854c8f84-79vx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151d7460f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.060 [INFO][6191] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.061 [INFO][6191] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" iface="eth0" netns="" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.061 [INFO][6191] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.061 [INFO][6191] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.132 [INFO][6198] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.132 [INFO][6198] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.132 [INFO][6198] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.168 [WARNING][6198] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.168 [INFO][6198] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.175 [INFO][6198] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.186220 containerd[2023]: 2026-01-23 23:57:11.180 [INFO][6191] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.186220 containerd[2023]: time="2026-01-23T23:57:11.184498503Z" level=info msg="TearDown network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" successfully" Jan 23 23:57:11.186220 containerd[2023]: time="2026-01-23T23:57:11.184536351Z" level=info msg="StopPodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" returns successfully" Jan 23 23:57:11.189265 containerd[2023]: time="2026-01-23T23:57:11.186660867Z" level=info msg="RemovePodSandbox for \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" Jan 23 23:57:11.189265 containerd[2023]: time="2026-01-23T23:57:11.186711267Z" level=info msg="Forcibly stopping sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\"" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.287 [WARNING][6213] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0", GenerateName:"calico-apiserver-58854c8f84-", Namespace:"calico-apiserver", SelfLink:"", UID:"3ac30c2d-dd8c-4060-a356-77e0062bc1c4", ResourceVersion:"1524", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 26, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"58854c8f84", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"0f1347573791cc04f9335d5e84b0e988435e7d3a1b0ed95bd83f409d6d86ce75", Pod:"calico-apiserver-58854c8f84-79vx7", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.125.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali151d7460f0f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.288 [INFO][6213] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.288 [INFO][6213] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" iface="eth0" netns="" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.288 [INFO][6213] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.288 [INFO][6213] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.345 [INFO][6220] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.348 [INFO][6220] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.348 [INFO][6220] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.370 [WARNING][6220] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.370 [INFO][6220] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" HandleID="k8s-pod-network.1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Workload="ip--172--31--18--95-k8s-calico--apiserver--58854c8f84--79vx7-eth0" Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.374 [INFO][6220] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.383988 containerd[2023]: 2026-01-23 23:57:11.379 [INFO][6213] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09" Jan 23 23:57:11.384782 containerd[2023]: time="2026-01-23T23:57:11.384039280Z" level=info msg="TearDown network for sandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" successfully" Jan 23 23:57:11.391974 containerd[2023]: time="2026-01-23T23:57:11.391803460Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:11.392137 containerd[2023]: time="2026-01-23T23:57:11.392008084Z" level=info msg="RemovePodSandbox \"1349706bd12b6ed6b10644339e49809f3885b9f1b8a39ce27a2eeb60c7212b09\" returns successfully" Jan 23 23:57:11.393942 containerd[2023]: time="2026-01-23T23:57:11.392789104Z" level=info msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" Jan 23 23:57:11.476447 systemd[1]: Started sshd@23-172.31.18.95:22-4.153.228.146:49512.service - OpenSSH per-connection server daemon (4.153.228.146:49512). Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.486 [WARNING][6235] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b627f7db-d96f-4cdc-9084-8b79e8e215fb", ResourceVersion:"1505", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188", Pod:"goldmane-7c778bb748-zhdzf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05baba26f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.487 [INFO][6235] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.487 [INFO][6235] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" iface="eth0" netns="" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.487 [INFO][6235] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.487 [INFO][6235] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.555 [INFO][6244] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.555 [INFO][6244] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.555 [INFO][6244] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.571 [WARNING][6244] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.571 [INFO][6244] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.573 [INFO][6244] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.581439 containerd[2023]: 2026-01-23 23:57:11.576 [INFO][6235] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.583636 containerd[2023]: time="2026-01-23T23:57:11.583020461Z" level=info msg="TearDown network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" successfully" Jan 23 23:57:11.583636 containerd[2023]: time="2026-01-23T23:57:11.583098161Z" level=info msg="StopPodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" returns successfully" Jan 23 23:57:11.585121 containerd[2023]: time="2026-01-23T23:57:11.584386265Z" level=info msg="RemovePodSandbox for \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" Jan 23 23:57:11.585121 containerd[2023]: time="2026-01-23T23:57:11.584438837Z" level=info msg="Forcibly stopping sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\"" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.686 [WARNING][6259] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"b627f7db-d96f-4cdc-9084-8b79e8e215fb", ResourceVersion:"1505", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 35, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"565aae90d756d927c88dd2c2777ae0c0d0140164804ffaf4e154c2ba70645188", Pod:"goldmane-7c778bb748-zhdzf", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.125.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali05baba26f89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.687 [INFO][6259] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.687 [INFO][6259] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" iface="eth0" netns="" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.687 [INFO][6259] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.687 [INFO][6259] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.740 [INFO][6267] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.741 [INFO][6267] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.741 [INFO][6267] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.757 [WARNING][6267] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. 
Ignoring ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.758 [INFO][6267] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" HandleID="k8s-pod-network.534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Workload="ip--172--31--18--95-k8s-goldmane--7c778bb748--zhdzf-eth0" Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.760 [INFO][6267] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.769870 containerd[2023]: 2026-01-23 23:57:11.765 [INFO][6259] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485" Jan 23 23:57:11.769870 containerd[2023]: time="2026-01-23T23:57:11.768725838Z" level=info msg="TearDown network for sandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" successfully" Jan 23 23:57:11.777328 containerd[2023]: time="2026-01-23T23:57:11.777233142Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:11.777476 containerd[2023]: time="2026-01-23T23:57:11.777339426Z" level=info msg="RemovePodSandbox \"534bafd33e537dd3b92acca89f5722d588b55827af3d6a6b9835779837a74485\" returns successfully" Jan 23 23:57:11.778585 containerd[2023]: time="2026-01-23T23:57:11.777990582Z" level=info msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.851 [WARNING][6282] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. 
ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75d141a9-546a-4b46-adcf-a6cd7a6e3073", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc", Pod:"coredns-66bc5c9577-jl7gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218237363b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.852 [INFO][6282] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.853 [INFO][6282] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" iface="eth0" netns="" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.854 [INFO][6282] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.854 [INFO][6282] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.902 [INFO][6289] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.903 [INFO][6289] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.903 [INFO][6289] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.916 [WARNING][6289] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.917 [INFO][6289] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.919 [INFO][6289] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:11.927089 containerd[2023]: 2026-01-23 23:57:11.924 [INFO][6282] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:11.928371 containerd[2023]: time="2026-01-23T23:57:11.928107906Z" level=info msg="TearDown network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" successfully" Jan 23 23:57:11.928371 containerd[2023]: time="2026-01-23T23:57:11.928163058Z" level=info msg="StopPodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" returns successfully" Jan 23 23:57:11.929397 containerd[2023]: time="2026-01-23T23:57:11.929340018Z" level=info msg="RemovePodSandbox for \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" Jan 23 23:57:11.929523 containerd[2023]: time="2026-01-23T23:57:11.929444706Z" level=info msg="Forcibly stopping sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\"" Jan 23 23:57:12.053835 sshd[6242]: Accepted publickey for core from 4.153.228.146 port 49512 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:12.059432 sshd[6242]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:12.074201 systemd-logind[1997]: New session 24 of user core. Jan 23 23:57:12.082585 systemd[1]: Started session-24.scope - Session 24 of User core. 
Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.006 [WARNING][6303] cni-plugin/k8s.go 604: CNI_CONTAINERID does not match WorkloadEndpoint ContainerID, don't delete WEP. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" WorkloadEndpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"75d141a9-546a-4b46-adcf-a6cd7a6e3073", ResourceVersion:"1159", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 23, 55, 13, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-95", ContainerID:"d6664e073b32a5a13551ce211a6149af648c7070e4e502e0b628472d820271dc", Pod:"coredns-66bc5c9577-jl7gt", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.125.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3218237363b", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.008 [INFO][6303] cni-plugin/k8s.go 640: Cleaning up netns ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.008 [INFO][6303] cni-plugin/dataplane_linux.go 555: CleanUpNamespace called with no netns name, ignoring. 
ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" iface="eth0" netns="" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.008 [INFO][6303] cni-plugin/k8s.go 647: Releasing IP address(es) ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.008 [INFO][6303] cni-plugin/utils.go 188: Calico CNI releasing IP address ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.053 [INFO][6310] ipam/ipam_plugin.go 436: Releasing address using handleID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.053 [INFO][6310] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.053 [INFO][6310] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.080 [WARNING][6310] ipam/ipam_plugin.go 453: Asked to release address but it doesn't exist. Ignoring ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.081 [INFO][6310] ipam/ipam_plugin.go 464: Releasing address using workloadID ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" HandleID="k8s-pod-network.ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Workload="ip--172--31--18--95-k8s-coredns--66bc5c9577--jl7gt-eth0" Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.091 [INFO][6310] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 23:57:12.101955 containerd[2023]: 2026-01-23 23:57:12.097 [INFO][6303] cni-plugin/k8s.go 653: Teardown processing complete. ContainerID="ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702" Jan 23 23:57:12.104938 containerd[2023]: time="2026-01-23T23:57:12.102140043Z" level=info msg="TearDown network for sandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" successfully" Jan 23 23:57:12.109379 containerd[2023]: time="2026-01-23T23:57:12.109306599Z" level=warning msg="Failed to get podSandbox status for container event for sandboxID \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\": an error occurred when try to find sandbox: not found. Sending the event with nil podSandboxStatus." Jan 23 23:57:12.109584 containerd[2023]: time="2026-01-23T23:57:12.109552611Z" level=info msg="RemovePodSandbox \"ce8c8c661389865e373783284278e7bc1e33ca64978d42004dda9c7dd2063702\" returns successfully" Jan 23 23:57:12.616835 sshd[6242]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:12.627267 systemd[1]: sshd@23-172.31.18.95:22-4.153.228.146:49512.service: Deactivated successfully. Jan 23 23:57:12.628042 systemd-logind[1997]: Session 24 logged out. Waiting for processes to exit. Jan 23 23:57:12.635251 systemd[1]: session-24.scope: Deactivated successfully. Jan 23 23:57:12.640119 systemd-logind[1997]: Removed session 24. 
Jan 23 23:57:15.524143 kubelet[3333]: E0123 23:57:15.524013 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:57:16.528949 kubelet[3333]: E0123 23:57:16.527190 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:57:16.532438 kubelet[3333]: E0123 23:57:16.532355 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:57:17.526051 kubelet[3333]: E0123 23:57:17.525635 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:57:17.734626 systemd[1]: Started sshd@24-172.31.18.95:22-4.153.228.146:56630.service - OpenSSH per-connection server daemon (4.153.228.146:56630). Jan 23 23:57:18.296227 sshd[6329]: Accepted publickey for core from 4.153.228.146 port 56630 ssh2: RSA SHA256:5AacvNrSqCkKxn2Zg0NJQDqu4zeDYsnVAvlYY2DAYUI Jan 23 23:57:18.300710 sshd[6329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 23:57:18.329238 systemd-logind[1997]: New session 25 of user core. Jan 23 23:57:18.335979 systemd[1]: Started session-25.scope - Session 25 of User core. 
Jan 23 23:57:18.838478 sshd[6329]: pam_unix(sshd:session): session closed for user core Jan 23 23:57:18.853106 systemd[1]: sshd@24-172.31.18.95:22-4.153.228.146:56630.service: Deactivated successfully. Jan 23 23:57:18.861090 systemd[1]: session-25.scope: Deactivated successfully. Jan 23 23:57:18.864044 systemd-logind[1997]: Session 25 logged out. Waiting for processes to exit. Jan 23 23:57:18.866993 systemd-logind[1997]: Removed session 25. Jan 23 23:57:22.525719 kubelet[3333]: E0123 23:57:22.525639 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:57:23.523362 kubelet[3333]: E0123 23:57:23.523247 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:57:28.527264 kubelet[3333]: E0123 23:57:28.527056 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:57:28.528778 containerd[2023]: time="2026-01-23T23:57:28.528708513Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:28.778544 containerd[2023]: time="2026-01-23T23:57:28.778095490Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io 
Jan 23 23:57:28.780431 containerd[2023]: time="2026-01-23T23:57:28.780288502Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:28.780431 containerd[2023]: time="2026-01-23T23:57:28.780363154Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:28.780726 kubelet[3333]: E0123 23:57:28.780576 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:28.780726 kubelet[3333]: E0123 23:57:28.780632 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:28.780726 kubelet[3333]: E0123 23:57:28.780740 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-dfgrz_calico-apiserver(88895574-7d47-4441-9e70-eebbca18d915): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:28.780726 kubelet[3333]: E0123 23:57:28.780793 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:57:29.523763 kubelet[3333]: E0123 23:57:29.523629 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4" Jan 23 23:57:30.525938 containerd[2023]: time="2026-01-23T23:57:30.525668339Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 23:57:31.033179 containerd[2023]: time="2026-01-23T23:57:31.033082557Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:31.035600 containerd[2023]: time="2026-01-23T23:57:31.035530101Z" level=error msg="PullImage 
\"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 23:57:31.035691 containerd[2023]: time="2026-01-23T23:57:31.035661237Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:31.035964 kubelet[3333]: E0123 23:57:31.035853 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:31.036489 kubelet[3333]: E0123 23:57:31.035968 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 23:57:31.036489 kubelet[3333]: E0123 23:57:31.036080 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-zhdzf_calico-system(b627f7db-d96f-4cdc-9084-8b79e8e215fb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:31.036489 kubelet[3333]: E0123 23:57:31.036133 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-zhdzf" podUID="b627f7db-d96f-4cdc-9084-8b79e8e215fb" Jan 23 23:57:33.765717 systemd[1]: cri-containerd-d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a.scope: Deactivated successfully. Jan 23 23:57:33.766301 systemd[1]: cri-containerd-d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a.scope: Consumed 25.496s CPU time. Jan 23 23:57:33.812608 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a-rootfs.mount: Deactivated successfully. Jan 23 23:57:33.826853 containerd[2023]: time="2026-01-23T23:57:33.826739463Z" level=info msg="shim disconnected" id=d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a namespace=k8s.io Jan 23 23:57:33.826853 containerd[2023]: time="2026-01-23T23:57:33.826850763Z" level=warning msg="cleaning up after shim disconnected" id=d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a namespace=k8s.io Jan 23 23:57:33.827974 containerd[2023]: time="2026-01-23T23:57:33.826872879Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:33.921614 systemd[1]: cri-containerd-533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56.scope: Deactivated successfully. 
Jan 23 23:57:33.923532 systemd[1]: cri-containerd-533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56.scope: Consumed 6.256s CPU time, 17.6M memory peak, 0B memory swap peak. Jan 23 23:57:33.968812 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56-rootfs.mount: Deactivated successfully. Jan 23 23:57:33.978113 containerd[2023]: time="2026-01-23T23:57:33.978000616Z" level=info msg="shim disconnected" id=533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56 namespace=k8s.io Jan 23 23:57:33.978584 containerd[2023]: time="2026-01-23T23:57:33.978380104Z" level=warning msg="cleaning up after shim disconnected" id=533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56 namespace=k8s.io Jan 23 23:57:33.978584 containerd[2023]: time="2026-01-23T23:57:33.978404872Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:34.509196 kubelet[3333]: I0123 23:57:34.509145 3333 scope.go:117] "RemoveContainer" containerID="533679ca15875edc9e7aaeb66de13bb43cbf569175efbdde21f665d53df05e56" Jan 23 23:57:34.516545 kubelet[3333]: I0123 23:57:34.515519 3333 scope.go:117] "RemoveContainer" containerID="d3dd1281346e78a6352e0d1cd3f4968689e5e2df30f770719f64e4c7a7403f5a" Jan 23 23:57:34.518206 containerd[2023]: time="2026-01-23T23:57:34.518150571Z" level=info msg="CreateContainer within sandbox \"453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 23 23:57:34.521674 containerd[2023]: time="2026-01-23T23:57:34.521355555Z" level=info msg="CreateContainer within sandbox \"ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 23 23:57:34.527013 containerd[2023]: time="2026-01-23T23:57:34.526657551Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 23:57:34.556450 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1413887095.mount: Deactivated successfully. Jan 23 23:57:34.569867 containerd[2023]: time="2026-01-23T23:57:34.569771943Z" level=info msg="CreateContainer within sandbox \"ea175fa49ace55f98d85b38a08f92eff77407b991c409e273ac43a53764353bc\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"92c2cf68f10143ef4fef6034aba334dbba731f96cfa2cd7f7133646b89138c77\"" Jan 23 23:57:34.573268 containerd[2023]: time="2026-01-23T23:57:34.573194859Z" level=info msg="StartContainer for \"92c2cf68f10143ef4fef6034aba334dbba731f96cfa2cd7f7133646b89138c77\"" Jan 23 23:57:34.585924 containerd[2023]: time="2026-01-23T23:57:34.584185011Z" level=info msg="CreateContainer within sandbox \"453d459784bf48f42929e710338c7e36a13fd2884581d665b6ee0fb13f0579ca\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"268b1013baa162f2bd2454cebad37f6a2a1412a6c31cf0effaf619840ad8b5cf\"" Jan 23 23:57:34.585924 containerd[2023]: time="2026-01-23T23:57:34.585054723Z" level=info msg="StartContainer for \"268b1013baa162f2bd2454cebad37f6a2a1412a6c31cf0effaf619840ad8b5cf\"" Jan 23 23:57:34.639526 systemd[1]: Started cri-containerd-92c2cf68f10143ef4fef6034aba334dbba731f96cfa2cd7f7133646b89138c77.scope - libcontainer container 92c2cf68f10143ef4fef6034aba334dbba731f96cfa2cd7f7133646b89138c77. 
Jan 23 23:57:34.659318 systemd[1]: Started cri-containerd-268b1013baa162f2bd2454cebad37f6a2a1412a6c31cf0effaf619840ad8b5cf.scope - libcontainer container 268b1013baa162f2bd2454cebad37f6a2a1412a6c31cf0effaf619840ad8b5cf. Jan 23 23:57:34.743651 containerd[2023]: time="2026-01-23T23:57:34.743578240Z" level=info msg="StartContainer for \"92c2cf68f10143ef4fef6034aba334dbba731f96cfa2cd7f7133646b89138c77\" returns successfully" Jan 23 23:57:34.773869 containerd[2023]: time="2026-01-23T23:57:34.773685928Z" level=info msg="StartContainer for \"268b1013baa162f2bd2454cebad37f6a2a1412a6c31cf0effaf619840ad8b5cf\" returns successfully" Jan 23 23:57:34.793206 containerd[2023]: time="2026-01-23T23:57:34.793110292Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:34.795562 containerd[2023]: time="2026-01-23T23:57:34.795441064Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 23:57:34.796043 containerd[2023]: time="2026-01-23T23:57:34.795615712Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:34.796175 kubelet[3333]: E0123 23:57:34.795846 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:34.796175 kubelet[3333]: E0123 23:57:34.795929 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 23:57:34.796175 kubelet[3333]: E0123 23:57:34.796043 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-54c598b4dd-zn6ss_calico-system(32710301-53d1-443d-ade3-ac9179beb56f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:34.796175 kubelet[3333]: E0123 23:57:34.796100 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-54c598b4dd-zn6ss" podUID="32710301-53d1-443d-ade3-ac9179beb56f" Jan 23 23:57:38.442255 systemd[1]: cri-containerd-3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300.scope: 
Deactivated successfully. Jan 23 23:57:38.445161 systemd[1]: cri-containerd-3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300.scope: Consumed 5.834s CPU time, 16.3M memory peak, 0B memory swap peak. Jan 23 23:57:38.502178 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300-rootfs.mount: Deactivated successfully. Jan 23 23:57:38.517424 containerd[2023]: time="2026-01-23T23:57:38.517065378Z" level=info msg="shim disconnected" id=3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300 namespace=k8s.io Jan 23 23:57:38.517424 containerd[2023]: time="2026-01-23T23:57:38.517145802Z" level=warning msg="cleaning up after shim disconnected" id=3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300 namespace=k8s.io Jan 23 23:57:38.517424 containerd[2023]: time="2026-01-23T23:57:38.517165638Z" level=info msg="cleaning up dead shim" namespace=k8s.io Jan 23 23:57:38.526992 containerd[2023]: time="2026-01-23T23:57:38.526376730Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 23:57:38.825247 containerd[2023]: time="2026-01-23T23:57:38.825180716Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:38.827552 containerd[2023]: time="2026-01-23T23:57:38.827453288Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 23:57:38.828424 containerd[2023]: time="2026-01-23T23:57:38.827535284Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 23:57:38.828522 kubelet[3333]: E0123 23:57:38.827869 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:38.828522 kubelet[3333]: E0123 23:57:38.827971 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 23:57:38.828522 kubelet[3333]: E0123 23:57:38.828077 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:38.830958 containerd[2023]: time="2026-01-23T23:57:38.830656832Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 23:57:39.095706 containerd[2023]: time="2026-01-23T23:57:39.095543705Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:39.097932 containerd[2023]: time="2026-01-23T23:57:39.097825973Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 23:57:39.098174 containerd[2023]: time="2026-01-23T23:57:39.097877393Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 23:57:39.098243 kubelet[3333]: E0123 23:57:39.098168 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:39.098243 kubelet[3333]: E0123 23:57:39.098228 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 23:57:39.098368 kubelet[3333]: E0123 23:57:39.098333 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-5757ddb5fd-l52x5_calico-system(1e83b674-bf5a-4da7-960a-435a24e8e6d1): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:39.098571 kubelet[3333]: E0123 23:57:39.098425 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-5757ddb5fd-l52x5" podUID="1e83b674-bf5a-4da7-960a-435a24e8e6d1" Jan 23 23:57:39.546291 kubelet[3333]: I0123 23:57:39.546247 3333 scope.go:117] "RemoveContainer" containerID="3819ca16411f9640f908028776e41413ee4538d7e6447472fcf2023038829300" Jan 23 23:57:39.550032 containerd[2023]: time="2026-01-23T23:57:39.549946868Z" level=info msg="CreateContainer within sandbox \"35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 23:57:39.578141 containerd[2023]: time="2026-01-23T23:57:39.577935416Z" level=info msg="CreateContainer within sandbox \"35128c041a1f7cc37cfe1590f636ca55f882c443340e8c93d832c7b9d615c7e6\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id 
\"0e1278edfef45e3dff43faca64ee7dfc7ad21a4d82fea95e068c525ea7697e33\"" Jan 23 23:57:39.579130 containerd[2023]: time="2026-01-23T23:57:39.579074180Z" level=info msg="StartContainer for \"0e1278edfef45e3dff43faca64ee7dfc7ad21a4d82fea95e068c525ea7697e33\"" Jan 23 23:57:39.581501 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1642878039.mount: Deactivated successfully. Jan 23 23:57:39.638229 systemd[1]: Started cri-containerd-0e1278edfef45e3dff43faca64ee7dfc7ad21a4d82fea95e068c525ea7697e33.scope - libcontainer container 0e1278edfef45e3dff43faca64ee7dfc7ad21a4d82fea95e068c525ea7697e33. Jan 23 23:57:39.707475 containerd[2023]: time="2026-01-23T23:57:39.707387384Z" level=info msg="StartContainer for \"0e1278edfef45e3dff43faca64ee7dfc7ad21a4d82fea95e068c525ea7697e33\" returns successfully" Jan 23 23:57:39.846227 kubelet[3333]: E0123 23:57:39.846055 3333 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.95:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-95?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" Jan 23 23:57:41.523715 kubelet[3333]: E0123 23:57:41.523650 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-dfgrz" podUID="88895574-7d47-4441-9e70-eebbca18d915" Jan 23 23:57:42.527958 containerd[2023]: time="2026-01-23T23:57:42.527488198Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 23:57:42.814958 containerd[2023]: time="2026-01-23T23:57:42.814183776Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:42.816531 containerd[2023]: time="2026-01-23T23:57:42.816333204Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 23:57:42.816531 containerd[2023]: time="2026-01-23T23:57:42.816473820Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 23:57:42.816764 kubelet[3333]: E0123 23:57:42.816690 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:42.816764 kubelet[3333]: E0123 23:57:42.816749 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 23:57:42.817363 kubelet[3333]: E0123 23:57:42.816850 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod 
csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:42.819208 containerd[2023]: time="2026-01-23T23:57:42.818603304Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 23:57:43.084147 containerd[2023]: time="2026-01-23T23:57:43.083845917Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:43.086316 containerd[2023]: time="2026-01-23T23:57:43.086082597Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 23:57:43.086316 containerd[2023]: time="2026-01-23T23:57:43.086250153Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 23:57:43.086872 kubelet[3333]: E0123 23:57:43.086634 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:43.086872 kubelet[3333]: E0123 23:57:43.086694 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 23:57:43.087247 kubelet[3333]: E0123 23:57:43.086797 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-bsxtr_calico-system(16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:43.087419 kubelet[3333]: E0123 23:57:43.087207 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-bsxtr" podUID="16aacbf4-be26-43d8-a2e1-8bb1a4ed82d0" Jan 23 23:57:44.525315 containerd[2023]: time="2026-01-23T23:57:44.525265824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 23:57:44.930389 containerd[2023]: time="2026-01-23T23:57:44.930201134Z" level=info msg="trying next host - response was http.StatusNotFound" host=ghcr.io Jan 23 23:57:44.932513 containerd[2023]: time="2026-01-23T23:57:44.932439350Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 23:57:44.932643 containerd[2023]: time="2026-01-23T23:57:44.932576066Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 23:57:44.932953 kubelet[3333]: E0123 23:57:44.932858 3333 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:44.933484 kubelet[3333]: E0123 23:57:44.932951 3333 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 23:57:44.933484 kubelet[3333]: E0123 23:57:44.933073 3333 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-58854c8f84-79vx7_calico-apiserver(3ac30c2d-dd8c-4060-a356-77e0062bc1c4): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 23:57:44.933484 kubelet[3333]: E0123 23:57:44.933129 3333 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-58854c8f84-79vx7" podUID="3ac30c2d-dd8c-4060-a356-77e0062bc1c4"