Jan 23 00:06:02.115210 kernel: Booting Linux on physical CPU 0x0000000000 [0x410fd083]
Jan 23 00:06:02.115254 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.0 p8) 14.3.0, GNU ld (Gentoo 2.44 p4) 2.44.0) #1 SMP PREEMPT Thu Jan 22 22:21:53 -00 2026
Jan 23 00:06:02.115278 kernel: KASLR disabled due to lack of seed
Jan 23 00:06:02.115294 kernel: efi: EFI v2.7 by EDK II
Jan 23 00:06:02.115311 kernel: efi: SMBIOS=0x7bed0000 SMBIOS 3.0=0x7beb0000 ACPI=0x786e0000 ACPI 2.0=0x786e0014 MEMATTR=0x7a734a98 MEMRESERVE=0x78557598
Jan 23 00:06:02.115327 kernel: secureboot: Secure boot disabled
Jan 23 00:06:02.115344 kernel: ACPI: Early table checksum verification disabled
Jan 23 00:06:02.115359 kernel: ACPI: RSDP 0x00000000786E0014 000024 (v02 AMAZON)
Jan 23 00:06:02.115374 kernel: ACPI: XSDT 0x00000000786D00E8 000064 (v01 AMAZON AMZNFACP 00000001 01000013)
Jan 23 00:06:02.115390 kernel: ACPI: FACP 0x00000000786B0000 000114 (v06 AMAZON AMZNFACP 00000001 AMZN 00000001)
Jan 23 00:06:02.115405 kernel: ACPI: DSDT 0x0000000078640000 0013D2 (v02 AMAZON AMZNDSDT 00000001 AMZN 00000001)
Jan 23 00:06:02.115425 kernel: ACPI: FACS 0x0000000078630000 000040
Jan 23 00:06:02.115440 kernel: ACPI: APIC 0x00000000786C0000 000108 (v04 AMAZON AMZNAPIC 00000001 AMZN 00000001)
Jan 23 00:06:02.115456 kernel: ACPI: SPCR 0x00000000786A0000 000050 (v02 AMAZON AMZNSPCR 00000001 AMZN 00000001)
Jan 23 00:06:02.115474 kernel: ACPI: GTDT 0x0000000078690000 000060 (v02 AMAZON AMZNGTDT 00000001 AMZN 00000001)
Jan 23 00:06:02.115541 kernel: ACPI: MCFG 0x0000000078680000 00003C (v02 AMAZON AMZNMCFG 00000001 AMZN 00000001)
Jan 23 00:06:02.115570 kernel: ACPI: SLIT 0x0000000078670000 00002D (v01 AMAZON AMZNSLIT 00000001 AMZN 00000001)
Jan 23 00:06:02.115587 kernel: ACPI: IORT 0x0000000078660000 000078 (v01 AMAZON AMZNIORT 00000001 AMZN 00000001)
Jan 23 00:06:02.115603 kernel: ACPI: PPTT 0x0000000078650000 0000EC (v01 AMAZON AMZNPPTT 00000001 AMZN 00000001)
Jan 23 00:06:02.115619 kernel: ACPI: SPCR: console: uart,mmio,0x90a0000,115200
Jan 23 00:06:02.115636 kernel: earlycon: uart0 at MMIO 0x00000000090a0000 (options '115200')
Jan 23 00:06:02.115652 kernel: printk: legacy bootconsole [uart0] enabled
Jan 23 00:06:02.115668 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 23 00:06:02.115684 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 00:06:02.115700 kernel: NODE_DATA(0) allocated [mem 0x4b584da00-0x4b5854fff]
Jan 23 00:06:02.115716 kernel: Zone ranges:
Jan 23 00:06:02.115732 kernel:   DMA      [mem 0x0000000040000000-0x00000000ffffffff]
Jan 23 00:06:02.115752 kernel:   DMA32    empty
Jan 23 00:06:02.115768 kernel:   Normal   [mem 0x0000000100000000-0x00000004b5ffffff]
Jan 23 00:06:02.115784 kernel:   Device   empty
Jan 23 00:06:02.115799 kernel: Movable zone start for each node
Jan 23 00:06:02.115815 kernel: Early memory node ranges
Jan 23 00:06:02.115830 kernel:   node   0: [mem 0x0000000040000000-0x000000007862ffff]
Jan 23 00:06:02.115846 kernel:   node   0: [mem 0x0000000078630000-0x000000007863ffff]
Jan 23 00:06:02.115861 kernel:   node   0: [mem 0x0000000078640000-0x00000000786effff]
Jan 23 00:06:02.115877 kernel:   node   0: [mem 0x00000000786f0000-0x000000007872ffff]
Jan 23 00:06:02.115893 kernel:   node   0: [mem 0x0000000078730000-0x000000007bbfffff]
Jan 23 00:06:02.115908 kernel:   node   0: [mem 0x000000007bc00000-0x000000007bfdffff]
Jan 23 00:06:02.115924 kernel:   node   0: [mem 0x000000007bfe0000-0x000000007fffffff]
Jan 23 00:06:02.115944 kernel:   node   0: [mem 0x0000000400000000-0x00000004b5ffffff]
Jan 23 00:06:02.115967 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x00000004b5ffffff]
Jan 23 00:06:02.115984 kernel: On node 0, zone Normal: 8192 pages in unavailable ranges
Jan 23 00:06:02.116000 kernel: cma: Reserved 16 MiB at 0x000000007f000000 on node -1
Jan 23 00:06:02.116017 kernel: psci: probing for conduit method from ACPI.
Jan 23 00:06:02.116037 kernel: psci: PSCIv1.0 detected in firmware.
Jan 23 00:06:02.116054 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 23 00:06:02.116071 kernel: psci: Trusted OS migration not required
Jan 23 00:06:02.116087 kernel: psci: SMC Calling Convention v1.1
Jan 23 00:06:02.116104 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000001)
Jan 23 00:06:02.116121 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 23 00:06:02.116137 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 23 00:06:02.116154 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 23 00:06:02.116171 kernel: Detected PIPT I-cache on CPU0
Jan 23 00:06:02.116188 kernel: CPU features: detected: GIC system register CPU interface
Jan 23 00:06:02.116205 kernel: CPU features: detected: Spectre-v2
Jan 23 00:06:02.116225 kernel: CPU features: detected: Spectre-v3a
Jan 23 00:06:02.116242 kernel: CPU features: detected: Spectre-BHB
Jan 23 00:06:02.116258 kernel: CPU features: detected: ARM erratum 1742098
Jan 23 00:06:02.116275 kernel: CPU features: detected: ARM errata 1165522, 1319367, or 1530923
Jan 23 00:06:02.116291 kernel: alternatives: applying boot alternatives
Jan 23 00:06:02.116310 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:06:02.116328 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 23 00:06:02.116344 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 23 00:06:02.116361 kernel: Fallback order for Node 0: 0
Jan 23 00:06:02.116378 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1007616
Jan 23 00:06:02.116395 kernel: Policy zone: Normal
Jan 23 00:06:02.116415 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 23 00:06:02.116431 kernel: software IO TLB: area num 2.
Jan 23 00:06:02.116448 kernel: software IO TLB: mapped [mem 0x0000000074557000-0x0000000078557000] (64MB)
Jan 23 00:06:02.116464 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 23 00:06:02.116481 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 23 00:06:02.116529 kernel: rcu: RCU event tracing is enabled.
Jan 23 00:06:02.116548 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 23 00:06:02.116566 kernel: Trampoline variant of Tasks RCU enabled.
Jan 23 00:06:02.116583 kernel: Tracing variant of Tasks RCU enabled.
Jan 23 00:06:02.116600 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 23 00:06:02.116617 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 23 00:06:02.116639 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:02.116656 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 23 00:06:02.116673 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 23 00:06:02.116689 kernel: GICv3: 96 SPIs implemented
Jan 23 00:06:02.116706 kernel: GICv3: 0 Extended SPIs implemented
Jan 23 00:06:02.116722 kernel: Root IRQ handler: gic_handle_irq
Jan 23 00:06:02.116738 kernel: GICv3: GICv3 features: 16 PPIs
Jan 23 00:06:02.116755 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 23 00:06:02.116772 kernel: GICv3: CPU0: found redistributor 0 region 0:0x0000000010200000
Jan 23 00:06:02.116788 kernel: ITS [mem 0x10080000-0x1009ffff]
Jan 23 00:06:02.116805 kernel: ITS@0x0000000010080000: allocated 8192 Devices @4000f0000 (indirect, esz 8, psz 64K, shr 1)
Jan 23 00:06:02.116822 kernel: ITS@0x0000000010080000: allocated 8192 Interrupt Collections @400100000 (flat, esz 8, psz 64K, shr 1)
Jan 23 00:06:02.116843 kernel: GICv3: using LPI property table @0x0000000400110000
Jan 23 00:06:02.116859 kernel: ITS: Using hypervisor restricted LPI range [128]
Jan 23 00:06:02.116876 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000400120000
Jan 23 00:06:02.116893 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 23 00:06:02.116909 kernel: arch_timer: cp15 timer(s) running at 83.33MHz (virt).
Jan 23 00:06:02.116926 kernel: clocksource: arch_sys_counter: mask: 0x1ffffffffffffff max_cycles: 0x13381ebeec, max_idle_ns: 440795203145 ns
Jan 23 00:06:02.116943 kernel: sched_clock: 57 bits at 83MHz, resolution 12ns, wraps every 4398046511100ns
Jan 23 00:06:02.116960 kernel: Console: colour dummy device 80x25
Jan 23 00:06:02.116977 kernel: printk: legacy console [tty1] enabled
Jan 23 00:06:02.116994 kernel: ACPI: Core revision 20240827
Jan 23 00:06:02.117012 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 166.66 BogoMIPS (lpj=83333)
Jan 23 00:06:02.117033 kernel: pid_max: default: 32768 minimum: 301
Jan 23 00:06:02.117050 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 23 00:06:02.117068 kernel: landlock: Up and running.
Jan 23 00:06:02.117085 kernel: SELinux: Initializing.
Jan 23 00:06:02.117102 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.117119 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.117136 kernel: rcu: Hierarchical SRCU implementation.
Jan 23 00:06:02.117153 kernel: rcu: Max phase no-delay instances is 400.
Jan 23 00:06:02.117170 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 23 00:06:02.117191 kernel: Remapping and enabling EFI services.
Jan 23 00:06:02.117208 kernel: smp: Bringing up secondary CPUs ...
Jan 23 00:06:02.117225 kernel: Detected PIPT I-cache on CPU1
Jan 23 00:06:02.117242 kernel: GICv3: CPU1: found redistributor 1 region 0:0x0000000010220000
Jan 23 00:06:02.117259 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000400130000
Jan 23 00:06:02.117276 kernel: CPU1: Booted secondary processor 0x0000000001 [0x410fd083]
Jan 23 00:06:02.117294 kernel: smp: Brought up 1 node, 2 CPUs
Jan 23 00:06:02.117311 kernel: SMP: Total of 2 processors activated.
Jan 23 00:06:02.117328 kernel: CPU: All CPU(s) started at EL1
Jan 23 00:06:02.117358 kernel: CPU features: detected: 32-bit EL0 Support
Jan 23 00:06:02.117376 kernel: CPU features: detected: 32-bit EL1 Support
Jan 23 00:06:02.117397 kernel: CPU features: detected: CRC32 instructions
Jan 23 00:06:02.117415 kernel: alternatives: applying system-wide alternatives
Jan 23 00:06:02.117433 kernel: Memory: 3796332K/4030464K available (11200K kernel code, 2458K rwdata, 9088K rodata, 39552K init, 1038K bss, 212788K reserved, 16384K cma-reserved)
Jan 23 00:06:02.117451 kernel: devtmpfs: initialized
Jan 23 00:06:02.117469 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 23 00:06:02.117991 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 23 00:06:02.118023 kernel: 16880 pages in range for non-PLT usage
Jan 23 00:06:02.118042 kernel: 508400 pages in range for PLT usage
Jan 23 00:06:02.118060 kernel: pinctrl core: initialized pinctrl subsystem
Jan 23 00:06:02.118078 kernel: SMBIOS 3.0.0 present.
Jan 23 00:06:02.118095 kernel: DMI: Amazon EC2 a1.large/, BIOS 1.0 11/1/2018
Jan 23 00:06:02.118113 kernel: DMI: Memory slots populated: 0/0
Jan 23 00:06:02.118131 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 23 00:06:02.118149 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 23 00:06:02.118175 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 23 00:06:02.118194 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 23 00:06:02.118212 kernel: audit: initializing netlink subsys (disabled)
Jan 23 00:06:02.118230 kernel: audit: type=2000 audit(0.226:1): state=initialized audit_enabled=0 res=1
Jan 23 00:06:02.118247 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 23 00:06:02.118265 kernel: cpuidle: using governor menu
Jan 23 00:06:02.118283 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 23 00:06:02.118301 kernel: ASID allocator initialised with 65536 entries
Jan 23 00:06:02.118319 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 23 00:06:02.118340 kernel: Serial: AMBA PL011 UART driver
Jan 23 00:06:02.118359 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 23 00:06:02.118377 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 23 00:06:02.118395 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 23 00:06:02.118413 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 23 00:06:02.118431 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 23 00:06:02.118448 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 23 00:06:02.118466 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 23 00:06:02.118484 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 23 00:06:02.118550 kernel: ACPI: Added _OSI(Module Device)
Jan 23 00:06:02.118569 kernel: ACPI: Added _OSI(Processor Device)
Jan 23 00:06:02.118587 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 23 00:06:02.118604 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 23 00:06:02.118622 kernel: ACPI: Interpreter enabled
Jan 23 00:06:02.118640 kernel: ACPI: Using GIC for interrupt routing
Jan 23 00:06:02.118657 kernel: ACPI: MCFG table detected, 1 entries
Jan 23 00:06:02.118675 kernel: ACPI: CPU0 has been hot-added
Jan 23 00:06:02.118694 kernel: ACPI: CPU1 has been hot-added
Jan 23 00:06:02.118716 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00])
Jan 23 00:06:02.118997 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 23 00:06:02.119185 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 23 00:06:02.119368 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 23 00:06:02.119579 kernel: acpi PNP0A08:00: ECAM area [mem 0x20000000-0x200fffff] reserved by PNP0C02:00
Jan 23 00:06:02.119764 kernel: acpi PNP0A08:00: ECAM at [mem 0x20000000-0x200fffff] for [bus 00]
Jan 23 00:06:02.119789 kernel: ACPI: Remapped I/O 0x000000001fff0000 to [io 0x0000-0xffff window]
Jan 23 00:06:02.119814 kernel: acpiphp: Slot [1] registered
Jan 23 00:06:02.119832 kernel: acpiphp: Slot [2] registered
Jan 23 00:06:02.119850 kernel: acpiphp: Slot [3] registered
Jan 23 00:06:02.119867 kernel: acpiphp: Slot [4] registered
Jan 23 00:06:02.119885 kernel: acpiphp: Slot [5] registered
Jan 23 00:06:02.119902 kernel: acpiphp: Slot [6] registered
Jan 23 00:06:02.119920 kernel: acpiphp: Slot [7] registered
Jan 23 00:06:02.119938 kernel: acpiphp: Slot [8] registered
Jan 23 00:06:02.119955 kernel: acpiphp: Slot [9] registered
Jan 23 00:06:02.119973 kernel: acpiphp: Slot [10] registered
Jan 23 00:06:02.119995 kernel: acpiphp: Slot [11] registered
Jan 23 00:06:02.120013 kernel: acpiphp: Slot [12] registered
Jan 23 00:06:02.120030 kernel: acpiphp: Slot [13] registered
Jan 23 00:06:02.120048 kernel: acpiphp: Slot [14] registered
Jan 23 00:06:02.120066 kernel: acpiphp: Slot [15] registered
Jan 23 00:06:02.120083 kernel: acpiphp: Slot [16] registered
Jan 23 00:06:02.120101 kernel: acpiphp: Slot [17] registered
Jan 23 00:06:02.120119 kernel: acpiphp: Slot [18] registered
Jan 23 00:06:02.120137 kernel: acpiphp: Slot [19] registered
Jan 23 00:06:02.120159 kernel: acpiphp: Slot [20] registered
Jan 23 00:06:02.120177 kernel: acpiphp: Slot [21] registered
Jan 23 00:06:02.120195 kernel: acpiphp: Slot [22] registered
Jan 23 00:06:02.120213 kernel: acpiphp: Slot [23] registered
Jan 23 00:06:02.120231 kernel: acpiphp: Slot [24] registered
Jan 23 00:06:02.120249 kernel: acpiphp: Slot [25] registered
Jan 23 00:06:02.120267 kernel: acpiphp: Slot [26] registered
Jan 23 00:06:02.120285 kernel: acpiphp: Slot [27] registered
Jan 23 00:06:02.120302 kernel: acpiphp: Slot [28] registered
Jan 23 00:06:02.120320 kernel: acpiphp: Slot [29] registered
Jan 23 00:06:02.120342 kernel: acpiphp: Slot [30] registered
Jan 23 00:06:02.120360 kernel: acpiphp: Slot [31] registered
Jan 23 00:06:02.120377 kernel: PCI host bridge to bus 0000:00
Jan 23 00:06:02.120597 kernel: pci_bus 0000:00: root bus resource [mem 0x80000000-0xffffffff window]
Jan 23 00:06:02.120770 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 23 00:06:02.120937 kernel: pci_bus 0000:00: root bus resource [mem 0x400000000000-0x407fffffffff window]
Jan 23 00:06:02.121101 kernel: pci_bus 0000:00: root bus resource [bus 00]
Jan 23 00:06:02.121331 kernel: pci 0000:00:00.0: [1d0f:0200] type 00 class 0x060000 conventional PCI endpoint
Jan 23 00:06:02.121585 kernel: pci 0000:00:01.0: [1d0f:8250] type 00 class 0x070003 conventional PCI endpoint
Jan 23 00:06:02.121789 kernel: pci 0000:00:01.0: BAR 0 [mem 0x80118000-0x80118fff]
Jan 23 00:06:02.123707 kernel: pci 0000:00:04.0: [1d0f:8061] type 00 class 0x010802 PCIe Root Complex Integrated Endpoint
Jan 23 00:06:02.123920 kernel: pci 0000:00:04.0: BAR 0 [mem 0x80114000-0x80117fff]
Jan 23 00:06:02.124112 kernel: pci 0000:00:04.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 00:06:02.124333 kernel: pci 0000:00:05.0: [1d0f:ec20] type 00 class 0x020000 PCIe Root Complex Integrated Endpoint
Jan 23 00:06:02.124556 kernel: pci 0000:00:05.0: BAR 0 [mem 0x80110000-0x80113fff]
Jan 23 00:06:02.124759 kernel: pci 0000:00:05.0: BAR 2 [mem 0x80000000-0x800fffff pref]
Jan 23 00:06:02.124956 kernel: pci 0000:00:05.0: BAR 4 [mem 0x80100000-0x8010ffff]
Jan 23 00:06:02.125154 kernel: pci 0000:00:05.0: PME# supported from D0 D1 D2 D3hot D3cold
Jan 23 00:06:02.125344 kernel: pci_bus 0000:00: resource 4 [mem 0x80000000-0xffffffff window]
Jan 23 00:06:02.126132 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window]
Jan 23 00:06:02.127774 kernel: pci_bus 0000:00: resource 6 [mem 0x400000000000-0x407fffffffff window]
Jan 23 00:06:02.127801 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35
Jan 23 00:06:02.127820 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36
Jan 23 00:06:02.127838 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37
Jan 23 00:06:02.127857 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38
Jan 23 00:06:02.127875 kernel: iommu: Default domain type: Translated
Jan 23 00:06:02.127892 kernel: iommu: DMA domain TLB invalidation policy: strict mode
Jan 23 00:06:02.127910 kernel: efivars: Registered efivars operations
Jan 23 00:06:02.127928 kernel: vgaarb: loaded
Jan 23 00:06:02.127951 kernel: clocksource: Switched to clocksource arch_sys_counter
Jan 23 00:06:02.127969 kernel: VFS: Disk quotas dquot_6.6.0
Jan 23 00:06:02.127987 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes)
Jan 23 00:06:02.128005 kernel: pnp: PnP ACPI init
Jan 23 00:06:02.128206 kernel: system 00:00: [mem 0x20000000-0x2fffffff] could not be reserved
Jan 23 00:06:02.128232 kernel: pnp: PnP ACPI: found 1 devices
Jan 23 00:06:02.128250 kernel: NET: Registered PF_INET protocol family
Jan 23 00:06:02.128269 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear)
Jan 23 00:06:02.128292 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear)
Jan 23 00:06:02.128310 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear)
Jan 23 00:06:02.128328 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 23 00:06:02.128346 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear)
Jan 23 00:06:02.128364 kernel: TCP: Hash tables configured (established 32768 bind 32768)
Jan 23 00:06:02.128382 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.128400 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear)
Jan 23 00:06:02.128418 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family
Jan 23 00:06:02.128436 kernel: PCI: CLS 0 bytes, default 64
Jan 23 00:06:02.128457 kernel: kvm [1]: HYP mode not available
Jan 23 00:06:02.128475 kernel: Initialise system trusted keyrings
Jan 23 00:06:02.128515 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0
Jan 23 00:06:02.128537 kernel: Key type asymmetric registered
Jan 23 00:06:02.128555 kernel: Asymmetric key parser 'x509' registered
Jan 23 00:06:02.128574 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249)
Jan 23 00:06:02.128592 kernel: io scheduler mq-deadline registered
Jan 23 00:06:02.128611 kernel: io scheduler kyber registered
Jan 23 00:06:02.128726 kernel: io scheduler bfq registered
Jan 23 00:06:02.128977 kernel: pl061_gpio ARMH0061:00: PL061 GPIO chip registered
Jan 23 00:06:02.129005 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0
Jan 23 00:06:02.129023 kernel: ACPI: button: Power Button [PWRB]
Jan 23 00:06:02.129041 kernel: input: Sleep Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0E:00/input/input1
Jan 23 00:06:02.129059 kernel: ACPI: button: Sleep Button [SLPB]
Jan 23 00:06:02.129077 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled
Jan 23 00:06:02.129096 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37
Jan 23 00:06:02.129290 kernel: serial 0000:00:01.0: enabling device (0010 -> 0012)
Jan 23 00:06:02.129319 kernel: printk: legacy console [ttyS0] disabled
Jan 23 00:06:02.129338 kernel: 0000:00:01.0: ttyS0 at MMIO 0x80118000 (irq = 14, base_baud = 115200) is a 16550A
Jan 23 00:06:02.129356 kernel: printk: legacy console [ttyS0] enabled
Jan 23 00:06:02.129374 kernel: printk: legacy bootconsole [uart0] disabled
Jan 23 00:06:02.129392 kernel: thunder_xcv, ver 1.0
Jan 23 00:06:02.129409 kernel: thunder_bgx, ver 1.0
Jan 23 00:06:02.129427 kernel: nicpf, ver 1.0
Jan 23 00:06:02.129445 kernel: nicvf, ver 1.0
Jan 23 00:06:02.131312 kernel: rtc-efi rtc-efi.0: registered as rtc0
Jan 23 00:06:02.131561 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-23T00:06:01 UTC (1769126761)
Jan 23 00:06:02.131589 kernel: hid: raw HID events driver (C) Jiri Kosina
Jan 23 00:06:02.131608 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 3 (0,80000003) counters available
Jan 23 00:06:02.131626 kernel: NET: Registered PF_INET6 protocol family
Jan 23 00:06:02.131644 kernel: watchdog: NMI not fully supported
Jan 23 00:06:02.131663 kernel: watchdog: Hard watchdog permanently disabled
Jan 23 00:06:02.131681 kernel: Segment Routing with IPv6
Jan 23 00:06:02.131699 kernel: In-situ OAM (IOAM) with IPv6
Jan 23 00:06:02.131717 kernel: NET: Registered PF_PACKET protocol family
Jan 23 00:06:02.131743 kernel: Key type dns_resolver registered
Jan 23 00:06:02.131761 kernel: registered taskstats version 1
Jan 23 00:06:02.131779 kernel: Loading compiled-in X.509 certificates
Jan 23 00:06:02.131798 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: 380753d9165686712e58c1d21e00c0268e70f18f'
Jan 23 00:06:02.131816 kernel: Demotion targets for Node 0: null
Jan 23 00:06:02.131833 kernel: Key type .fscrypt registered
Jan 23 00:06:02.131851 kernel: Key type fscrypt-provisioning registered
Jan 23 00:06:02.131869 kernel: ima: No TPM chip found, activating TPM-bypass!
Jan 23 00:06:02.131888 kernel: ima: Allocated hash algorithm: sha1
Jan 23 00:06:02.131912 kernel: ima: No architecture policies found
Jan 23 00:06:02.131930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 23 00:06:02.131949 kernel: clk: Disabling unused clocks
Jan 23 00:06:02.131967 kernel: PM: genpd: Disabling unused power domains
Jan 23 00:06:02.131985 kernel: Warning: unable to open an initial console.
Jan 23 00:06:02.132005 kernel: Freeing unused kernel memory: 39552K
Jan 23 00:06:02.132023 kernel: Run /init as init process
Jan 23 00:06:02.132042 kernel:   with arguments:
Jan 23 00:06:02.132061 kernel:     /init
Jan 23 00:06:02.132084 kernel:   with environment:
Jan 23 00:06:02.132102 kernel:     HOME=/
Jan 23 00:06:02.132120 kernel:     TERM=linux
Jan 23 00:06:02.132140 systemd[1]: Successfully made /usr/ read-only.
Jan 23 00:06:02.132165 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:06:02.132185 systemd[1]: Detected virtualization amazon.
Jan 23 00:06:02.132204 systemd[1]: Detected architecture arm64.
Jan 23 00:06:02.132227 systemd[1]: Running in initrd.
Jan 23 00:06:02.132246 systemd[1]: No hostname configured, using default hostname.
Jan 23 00:06:02.132266 systemd[1]: Hostname set to .
Jan 23 00:06:02.132285 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:06:02.132304 systemd[1]: Queued start job for default target initrd.target.
Jan 23 00:06:02.132323 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:02.132343 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:02.132364 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 23 00:06:02.132388 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:06:02.132408 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 23 00:06:02.132428 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 23 00:06:02.132450 systemd[1]: Expecting device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - /dev/disk/by-partuuid/7130c94a-213a-4e5a-8e26-6cce9662f132...
Jan 23 00:06:02.132470 systemd[1]: Expecting device dev-mapper-usr.device - /dev/mapper/usr...
Jan 23 00:06:02.133519 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:02.133559 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:02.133587 systemd[1]: Reached target paths.target - Path Units.
Jan 23 00:06:02.133607 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:06:02.133626 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:06:02.133646 systemd[1]: Reached target timers.target - Timer Units.
Jan 23 00:06:02.133665 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:06:02.133684 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:06:02.133704 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 23 00:06:02.133723 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 23 00:06:02.133742 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:02.133766 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:02.133785 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:02.133804 systemd[1]: Reached target sockets.target - Socket Units.
Jan 23 00:06:02.133824 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 23 00:06:02.133864 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:06:02.133885 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 23 00:06:02.133906 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 23 00:06:02.133925 systemd[1]: Starting systemd-fsck-usr.service...
Jan 23 00:06:02.133949 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:06:02.133969 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:06:02.133988 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:02.134007 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 23 00:06:02.134028 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:02.134100 systemd-journald[256]: Collecting audit messages is disabled.
Jan 23 00:06:02.134144 systemd[1]: Finished systemd-fsck-usr.service.
Jan 23 00:06:02.134165 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 23 00:06:02.134185 systemd-journald[256]: Journal started
Jan 23 00:06:02.134226 systemd-journald[256]: Runtime Journal (/run/log/journal/ec2fac5b7f3ec7932b3dbb8c05ad2e78) is 8M, max 75.3M, 67.3M free.
Jan 23 00:06:02.104805 systemd-modules-load[259]: Inserted module 'overlay'
Jan 23 00:06:02.149006 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:06:02.149076 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 23 00:06:02.151561 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:02.160675 kernel: Bridge firewalling registered
Jan 23 00:06:02.153456 systemd-modules-load[259]: Inserted module 'br_netfilter'
Jan 23 00:06:02.155699 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:02.172532 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 23 00:06:02.185731 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:02.200412 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:06:02.206358 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 23 00:06:02.219776 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:06:02.250877 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:06:02.254035 systemd-tmpfiles[280]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 23 00:06:02.265581 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:02.271237 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 23 00:06:02.277031 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:02.289401 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:02.298757 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:06:02.331669 dracut-cmdline[299]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=tty1 console=ttyS0,115200n8 earlycon flatcar.first_boot=detected acpi=force flatcar.oem.id=ec2 modprobe.blacklist=xen_fbfront net.ifnames=0 nvme_core.io_timeout=4294967295 verity.usrhash=38aa0560e146398cb8c3378a56d449784f1c7652139d7b61279d764fcc4c793a
Jan 23 00:06:02.398168 systemd-resolved[301]: Positive Trust Anchors:
Jan 23 00:06:02.399468 systemd-resolved[301]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:06:02.400233 systemd-resolved[301]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:06:02.500528 kernel: SCSI subsystem initialized
Jan 23 00:06:02.508556 kernel: Loading iSCSI transport class v2.0-870.
Jan 23 00:06:02.521613 kernel: iscsi: registered transport (tcp)
Jan 23 00:06:02.543003 kernel: iscsi: registered transport (qla4xxx)
Jan 23 00:06:02.543089 kernel: QLogic iSCSI HBA Driver
Jan 23 00:06:02.579685 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:06:02.618553 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:02.627350 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:06:02.676560 kernel: random: crng init done
Jan 23 00:06:02.676796 systemd-resolved[301]: Defaulting to hostname 'linux'.
Jan 23 00:06:02.680695 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:02.685505 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:02.723338 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:06:02.729576 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 23 00:06:02.816554 kernel: raid6: neonx8 gen() 6518 MB/s
Jan 23 00:06:02.833529 kernel: raid6: neonx4 gen() 6565 MB/s
Jan 23 00:06:02.850528 kernel: raid6: neonx2 gen() 5452 MB/s
Jan 23 00:06:02.867527 kernel: raid6: neonx1 gen() 3958 MB/s
Jan 23 00:06:02.884527 kernel: raid6: int64x8 gen() 3669 MB/s
Jan 23 00:06:02.901529 kernel: raid6: int64x4 gen() 3666 MB/s
Jan 23 00:06:02.918526 kernel: raid6: int64x2 gen() 3612 MB/s
Jan 23 00:06:02.936604 kernel: raid6: int64x1 gen() 2768 MB/s
Jan 23 00:06:02.936645 kernel: raid6: using algorithm neonx4 gen() 6565 MB/s
Jan 23 00:06:02.955530 kernel: raid6: .... xor() 4843 MB/s, rmw enabled
Jan 23 00:06:02.955576 kernel: raid6: using neon recovery algorithm
Jan 23 00:06:02.964229 kernel: xor: measuring software checksum speed
Jan 23 00:06:02.964283 kernel: 8regs : 12917 MB/sec
Jan 23 00:06:02.965507 kernel: 32regs : 13043 MB/sec
Jan 23 00:06:02.967894 kernel: arm64_neon : 8646 MB/sec
Jan 23 00:06:02.967927 kernel: xor: using function: 32regs (13043 MB/sec)
Jan 23 00:06:03.058541 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 23 00:06:03.070315 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:06:03.081130 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:06:03.134711 systemd-udevd[510]: Using default interface naming scheme 'v255'.
Jan 23 00:06:03.145620 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:06:03.163098 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 23 00:06:03.204968 dracut-pre-trigger[520]: rd.md=0: removing MD RAID activation
Jan 23 00:06:03.250906 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:06:03.259726 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:06:03.405713 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:03.412388 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 23 00:06:03.552051 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35
Jan 23 00:06:03.552143 kernel: nvme nvme0: pci function 0000:00:04.0
Jan 23 00:06:03.559263 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36
Jan 23 00:06:03.559345 kernel: ena 0000:00:05.0: enabling device (0010 -> 0012)
Jan 23 00:06:03.566537 kernel: nvme nvme0: 2/0/0 default/read/poll queues
Jan 23 00:06:03.569156 kernel: ena 0000:00:05.0: ENA device version: 0.10
Jan 23 00:06:03.569442 kernel: ena 0000:00:05.0: ENA controller version: 0.0.1 implementation version 1
Jan 23 00:06:03.574689 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 23 00:06:03.576353 kernel: GPT:9289727 != 33554431
Jan 23 00:06:03.576402 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 23 00:06:03.580068 kernel: GPT:9289727 != 33554431
Jan 23 00:06:03.580129 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 23 00:06:03.582854 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 00:06:03.592550 kernel: ena 0000:00:05.0: Elastic Network Adapter (ENA) found at mem 80110000, mac addr 06:e6:ac:36:09:41
Jan 23 00:06:03.601990 (udev-worker)[578]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 00:06:03.612755 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:03.613044 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:03.622822 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:03.630376 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:03.637178 systemd[1]: run-credentials-systemd\x2dvconsole\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:03.676542 kernel: nvme nvme0: using unchecked data buffer
Jan 23 00:06:03.683908 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:03.771631 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - Amazon Elastic Block Store EFI-SYSTEM.
Jan 23 00:06:03.912820 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:06:03.934389 systemd[1]: Found device dev-disk-by\x2dpartuuid-7130c94a\x2d213a\x2d4e5a\x2d8e26\x2d6cce9662f132.device - Amazon Elastic Block Store USR-A.
Jan 23 00:06:03.941452 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - Amazon Elastic Block Store USR-A.
Jan 23 00:06:03.966864 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - Amazon Elastic Block Store ROOT.
Jan 23 00:06:03.993530 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 00:06:03.999665 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:06:04.002657 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:04.011226 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:06:04.017391 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 23 00:06:04.022108 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 23 00:06:04.047345 disk-uuid[689]: Primary Header is updated.
Jan 23 00:06:04.047345 disk-uuid[689]: Secondary Entries is updated.
Jan 23 00:06:04.047345 disk-uuid[689]: Secondary Header is updated.
Jan 23 00:06:04.058656 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 00:06:04.067595 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:06:05.081619 kernel: nvme0n1: p1 p2 p3 p4 p6 p7 p9
Jan 23 00:06:05.082616 disk-uuid[692]: The operation has completed successfully.
Jan 23 00:06:05.290143 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 23 00:06:05.292670 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 23 00:06:05.356233 systemd[1]: Starting verity-setup.service - Verity Setup for /dev/mapper/usr...
Jan 23 00:06:05.393849 sh[957]: Success
Jan 23 00:06:05.424166 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 23 00:06:05.424305 kernel: device-mapper: uevent: version 1.0.3
Jan 23 00:06:05.424335 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 23 00:06:05.438521 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 23 00:06:05.531778 systemd[1]: Found device dev-mapper-usr.device - /dev/mapper/usr.
Jan 23 00:06:05.539148 systemd[1]: Mounting sysusr-usr.mount - /sysusr/usr...
Jan 23 00:06:05.562232 systemd[1]: Finished verity-setup.service - Verity Setup for /dev/mapper/usr.
Jan 23 00:06:05.584547 kernel: BTRFS: device fsid 97a43946-ed04-45c1-a355-c0350e8b973e devid 1 transid 38 /dev/mapper/usr (254:0) scanned by mount (980)
Jan 23 00:06:05.589134 kernel: BTRFS info (device dm-0): first mount of filesystem 97a43946-ed04-45c1-a355-c0350e8b973e
Jan 23 00:06:05.589185 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:05.723708 kernel: BTRFS info (device dm-0): enabling ssd optimizations
Jan 23 00:06:05.723778 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 23 00:06:05.723804 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 23 00:06:05.741322 systemd[1]: Mounted sysusr-usr.mount - /sysusr/usr.
Jan 23 00:06:05.748272 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:06:05.752657 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 23 00:06:05.753945 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 23 00:06:05.758778 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 23 00:06:05.818589 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1014)
Jan 23 00:06:05.824402 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:05.824483 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:05.833850 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 00:06:05.833924 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 00:06:05.843598 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:05.846122 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 23 00:06:05.855840 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 23 00:06:05.950441 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:06:05.959593 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:06:06.029873 systemd-networkd[1149]: lo: Link UP
Jan 23 00:06:06.030332 systemd-networkd[1149]: lo: Gained carrier
Jan 23 00:06:06.033605 systemd-networkd[1149]: Enumeration completed
Jan 23 00:06:06.034553 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:06.034560 systemd-networkd[1149]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:06:06.044811 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:06:06.049528 systemd[1]: Reached target network.target - Network.
Jan 23 00:06:06.055278 systemd-networkd[1149]: eth0: Link UP
Jan 23 00:06:06.055292 systemd-networkd[1149]: eth0: Gained carrier
Jan 23 00:06:06.055315 systemd-networkd[1149]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:06.075571 systemd-networkd[1149]: eth0: DHCPv4 address 172.31.18.130/20, gateway 172.31.16.1 acquired from 172.31.16.1
Jan 23 00:06:06.426960 ignition[1072]: Ignition 2.22.0
Jan 23 00:06:06.426985 ignition[1072]: Stage: fetch-offline
Jan 23 00:06:06.428339 ignition[1072]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:06.428362 ignition[1072]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:06.429110 ignition[1072]: Ignition finished successfully
Jan 23 00:06:06.439538 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:06:06.446348 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 23 00:06:06.488766 ignition[1160]: Ignition 2.22.0
Jan 23 00:06:06.488804 ignition[1160]: Stage: fetch
Jan 23 00:06:06.489365 ignition[1160]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:06.489388 ignition[1160]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:06.489583 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:06.510305 ignition[1160]: PUT result: OK
Jan 23 00:06:06.514031 ignition[1160]: parsed url from cmdline: ""
Jan 23 00:06:06.514184 ignition[1160]: no config URL provided
Jan 23 00:06:06.514267 ignition[1160]: reading system config file "/usr/lib/ignition/user.ign"
Jan 23 00:06:06.514294 ignition[1160]: no config at "/usr/lib/ignition/user.ign"
Jan 23 00:06:06.516561 ignition[1160]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:06.523474 ignition[1160]: PUT result: OK
Jan 23 00:06:06.523578 ignition[1160]: GET http://169.254.169.254/2019-10-01/user-data: attempt #1
Jan 23 00:06:06.532536 ignition[1160]: GET result: OK
Jan 23 00:06:06.532737 ignition[1160]: parsing config with SHA512: 15d6af744dc049af7dc664bbf33df4d02c58c950e8d30f7b51adee2a2fa1810d742a2c28b9d74c92d898f3b3243083d14ea761e05401faa5f4e076974c29e45b
Jan 23 00:06:06.547281 unknown[1160]: fetched base config from "system"
Jan 23 00:06:06.547310 unknown[1160]: fetched base config from "system"
Jan 23 00:06:06.548021 ignition[1160]: fetch: fetch complete
Jan 23 00:06:06.547322 unknown[1160]: fetched user config from "aws"
Jan 23 00:06:06.548033 ignition[1160]: fetch: fetch passed
Jan 23 00:06:06.548123 ignition[1160]: Ignition finished successfully
Jan 23 00:06:06.562553 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 23 00:06:06.568752 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 23 00:06:06.621137 ignition[1167]: Ignition 2.22.0
Jan 23 00:06:06.621166 ignition[1167]: Stage: kargs
Jan 23 00:06:06.621796 ignition[1167]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:06.621832 ignition[1167]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:06.621980 ignition[1167]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:06.627682 ignition[1167]: PUT result: OK
Jan 23 00:06:06.645230 ignition[1167]: kargs: kargs passed
Jan 23 00:06:06.645336 ignition[1167]: Ignition finished successfully
Jan 23 00:06:06.651670 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 23 00:06:06.657651 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 23 00:06:06.709075 ignition[1173]: Ignition 2.22.0
Jan 23 00:06:06.709599 ignition[1173]: Stage: disks
Jan 23 00:06:06.710159 ignition[1173]: no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:06.710182 ignition[1173]: no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:06.710307 ignition[1173]: PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:06.714658 ignition[1173]: PUT result: OK
Jan 23 00:06:06.724163 ignition[1173]: disks: disks passed
Jan 23 00:06:06.724450 ignition[1173]: Ignition finished successfully
Jan 23 00:06:06.733583 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 23 00:06:06.740142 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:06.745185 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems.
Jan 23 00:06:06.748522 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:06:06.753327 systemd[1]: Reached target sysinit.target - System Initialization.
Jan 23 00:06:06.755864 systemd[1]: Reached target basic.target - Basic System.
Jan 23 00:06:06.766713 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT...
Jan 23 00:06:06.823621 systemd-fsck[1181]: ROOT: clean, 15/553520 files, 52789/553472 blocks
Jan 23 00:06:06.827543 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT.
Jan 23 00:06:06.834096 systemd[1]: Mounting sysroot.mount - /sysroot...
Jan 23 00:06:06.972514 kernel: EXT4-fs (nvme0n1p9): mounted filesystem f31390ab-27e9-47d9-a374-053913301d53 r/w with ordered data mode. Quota mode: none.
Jan 23 00:06:06.972692 systemd[1]: Mounted sysroot.mount - /sysroot.
Jan 23 00:06:06.977044 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:06:06.986113 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:06.990004 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr...
Jan 23 00:06:06.997139 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met.
Jan 23 00:06:06.997240 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot).
Jan 23 00:06:06.997293 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:07.021440 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr.
Jan 23 00:06:07.027898 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup...
Jan 23 00:06:07.045547 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1200)
Jan 23 00:06:07.050131 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:07.050194 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:07.057475 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 00:06:07.057550 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 00:06:07.061105 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:07.329812 systemd-networkd[1149]: eth0: Gained IPv6LL
Jan 23 00:06:07.372518 initrd-setup-root[1224]: cut: /sysroot/etc/passwd: No such file or directory
Jan 23 00:06:07.382361 initrd-setup-root[1231]: cut: /sysroot/etc/group: No such file or directory
Jan 23 00:06:07.391011 initrd-setup-root[1238]: cut: /sysroot/etc/shadow: No such file or directory
Jan 23 00:06:07.400026 initrd-setup-root[1245]: cut: /sysroot/etc/gshadow: No such file or directory
Jan 23 00:06:07.701214 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup.
Jan 23 00:06:07.707524 systemd[1]: Starting ignition-mount.service - Ignition (mount)...
Jan 23 00:06:07.716824 systemd[1]: Starting sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:07.741193 systemd[1]: sysroot-oem.mount: Deactivated successfully.
Jan 23 00:06:07.748516 kernel: BTRFS info (device nvme0n1p6): last unmount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:07.775030 systemd[1]: Finished sysroot-boot.service - /sysroot/boot.
Jan 23 00:06:07.801829 ignition[1319]: INFO : Ignition 2.22.0
Jan 23 00:06:07.803925 ignition[1319]: INFO : Stage: mount
Jan 23 00:06:07.803925 ignition[1319]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:07.803925 ignition[1319]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:07.803925 ignition[1319]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:07.813994 ignition[1319]: INFO : PUT result: OK
Jan 23 00:06:07.820774 ignition[1319]: INFO : mount: mount passed
Jan 23 00:06:07.822581 ignition[1319]: INFO : Ignition finished successfully
Jan 23 00:06:07.829575 systemd[1]: Finished ignition-mount.service - Ignition (mount).
Jan 23 00:06:07.837365 systemd[1]: Starting ignition-files.service - Ignition (files)...
Jan 23 00:06:07.975812 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem...
Jan 23 00:06:08.028536 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/nvme0n1p6 (259:5) scanned by mount (1329)
Jan 23 00:06:08.033569 kernel: BTRFS info (device nvme0n1p6): first mount of filesystem e9ae44b3-0aec-43ca-ad8b-9cf4e242132f
Jan 23 00:06:08.033632 kernel: BTRFS info (device nvme0n1p6): using crc32c (crc32c-generic) checksum algorithm
Jan 23 00:06:08.041560 kernel: BTRFS info (device nvme0n1p6): enabling ssd optimizations
Jan 23 00:06:08.041669 kernel: BTRFS info (device nvme0n1p6): enabling free space tree
Jan 23 00:06:08.045102 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem.
Jan 23 00:06:08.102350 ignition[1345]: INFO : Ignition 2.22.0
Jan 23 00:06:08.102350 ignition[1345]: INFO : Stage: files
Jan 23 00:06:08.106134 ignition[1345]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:08.106134 ignition[1345]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:08.111208 ignition[1345]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:08.114696 ignition[1345]: INFO : PUT result: OK
Jan 23 00:06:08.119506 ignition[1345]: DEBUG : files: compiled without relabeling support, skipping
Jan 23 00:06:08.135105 ignition[1345]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core"
Jan 23 00:06:08.135105 ignition[1345]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core"
Jan 23 00:06:08.145874 ignition[1345]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core"
Jan 23 00:06:08.150142 ignition[1345]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core"
Jan 23 00:06:08.153921 unknown[1345]: wrote ssh authorized keys file for user: core
Jan 23 00:06:08.156521 ignition[1345]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core"
Jan 23 00:06:08.162266 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 00:06:08.162266 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1
Jan 23 00:06:08.249261 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK
Jan 23 00:06:08.452883 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz"
Jan 23 00:06:08.457273 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:08.461124 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh"
Jan 23 00:06:08.461124 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:08.468841 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml"
Jan 23 00:06:08.472874 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:08.476735 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml"
Jan 23 00:06:08.476735 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:08.476735 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml"
Jan 23 00:06:08.493880 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:08.497802 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 23 00:06:08.497802 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 00:06:08.509972 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 00:06:08.516252 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 00:06:08.516252 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.34.1-arm64.raw: attempt #1
Jan 23 00:06:08.972055 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK
Jan 23 00:06:09.500875 ignition[1345]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.34.1-arm64.raw"
Jan 23 00:06:09.500875 ignition[1345]: INFO : files: op(b): [started] processing unit "prepare-helm.service"
Jan 23 00:06:09.529416 ignition[1345]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:09.537330 ignition[1345]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service"
Jan 23 00:06:09.537330 ignition[1345]: INFO : files: op(b): [finished] processing unit "prepare-helm.service"
Jan 23 00:06:09.537330 ignition[1345]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:09.537330 ignition[1345]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service"
Jan 23 00:06:09.551874 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:09.551874 ignition[1345]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json"
Jan 23 00:06:09.551874 ignition[1345]: INFO : files: files passed
Jan 23 00:06:09.551874 ignition[1345]: INFO : Ignition finished successfully
Jan 23 00:06:09.550908 systemd[1]: Finished ignition-files.service - Ignition (files).
Jan 23 00:06:09.557748 systemd[1]: Starting ignition-quench.service - Ignition (record completion)...
Jan 23 00:06:09.569932 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion...
Jan 23 00:06:09.600580 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 23 00:06:09.603595 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 23 00:06:09.618372 initrd-setup-root-after-ignition[1380]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:09.622218 initrd-setup-root-after-ignition[1376]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:09.622218 initrd-setup-root-after-ignition[1376]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory
Jan 23 00:06:09.631455 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:09.637355 systemd[1]: Reached target ignition-complete.target - Ignition Complete.
Jan 23 00:06:09.643412 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root...
Jan 23 00:06:09.736651 systemd[1]: initrd-parse-etc.service: Deactivated successfully.
Jan 23 00:06:09.737036 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root.
Jan 23 00:06:09.744875 systemd[1]: Reached target initrd-fs.target - Initrd File Systems.
Jan 23 00:06:09.749769 systemd[1]: Reached target initrd.target - Initrd Default Target.
Jan 23 00:06:09.755064 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met.
Jan 23 00:06:09.756603 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook...
Jan 23 00:06:09.811543 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:09.819714 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons...
Jan 23 00:06:09.855567 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups.
Jan 23 00:06:09.858535 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:09.862104 systemd[1]: Stopped target timers.target - Timer Units.
Jan 23 00:06:09.866648 systemd[1]: dracut-pre-pivot.service: Deactivated successfully.
Jan 23 00:06:09.866884 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook.
Jan 23 00:06:09.875616 systemd[1]: Stopped target initrd.target - Initrd Default Target.
Jan 23 00:06:09.883246 systemd[1]: Stopped target basic.target - Basic System.
Jan 23 00:06:09.886151 systemd[1]: Stopped target ignition-complete.target - Ignition Complete.
Jan 23 00:06:09.890568 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup.
Jan 23 00:06:09.893878 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device.
Jan 23 00:06:09.898712 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System.
Jan 23 00:06:09.901830 systemd[1]: Stopped target remote-fs.target - Remote File Systems.
Jan 23 00:06:09.909237 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 23 00:06:09.917097 systemd[1]: Stopped target sysinit.target - System Initialization.
Jan 23 00:06:09.920220 systemd[1]: Stopped target local-fs.target - Local File Systems.
Jan 23 00:06:09.924625 systemd[1]: Stopped target swap.target - Swaps.
Jan 23 00:06:09.927290 systemd[1]: dracut-pre-mount.service: Deactivated successfully.
Jan 23 00:06:09.927541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook.
Jan 23 00:06:09.935124 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:09.941678 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:09.944929 systemd[1]: clevis-luks-askpass.path: Deactivated successfully.
Jan 23 00:06:09.949340 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:09.952200 systemd[1]: dracut-initqueue.service: Deactivated successfully.
Jan 23 00:06:09.952426 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook.
Jan 23 00:06:09.960315 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully.
Jan 23 00:06:09.960624 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion.
Jan 23 00:06:09.969826 systemd[1]: ignition-files.service: Deactivated successfully.
Jan 23 00:06:09.970067 systemd[1]: Stopped ignition-files.service - Ignition (files).
Jan 23 00:06:09.979031 systemd[1]: Stopping ignition-mount.service - Ignition (mount)...
Jan 23 00:06:09.987945 systemd[1]: kmod-static-nodes.service: Deactivated successfully.
Jan 23 00:06:09.988241 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:10.012978 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot...
Jan 23 00:06:10.015132 systemd[1]: systemd-udev-trigger.service: Deactivated successfully.
Jan 23 00:06:10.015421 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:10.018637 systemd[1]: dracut-pre-trigger.service: Deactivated successfully.
Jan 23 00:06:10.018876 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 23 00:06:10.050979 ignition[1400]: INFO : Ignition 2.22.0
Jan 23 00:06:10.050979 ignition[1400]: INFO : Stage: umount
Jan 23 00:06:10.056649 ignition[1400]: INFO : no configs at "/usr/lib/ignition/base.d"
Jan 23 00:06:10.056649 ignition[1400]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/aws"
Jan 23 00:06:10.056649 ignition[1400]: INFO : PUT http://169.254.169.254/latest/api/token: attempt #1
Jan 23 00:06:10.064889 ignition[1400]: INFO : PUT result: OK
Jan 23 00:06:10.071889 systemd[1]: initrd-cleanup.service: Deactivated successfully.
Jan 23 00:06:10.074072 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons.
Jan 23 00:06:10.084736 ignition[1400]: INFO : umount: umount passed
Jan 23 00:06:10.086786 ignition[1400]: INFO : Ignition finished successfully
Jan 23 00:06:10.092600 systemd[1]: ignition-mount.service: Deactivated successfully.
Jan 23 00:06:10.092885 systemd[1]: Stopped ignition-mount.service - Ignition (mount).
Jan 23 00:06:10.097378 systemd[1]: ignition-disks.service: Deactivated successfully.
Jan 23 00:06:10.099567 systemd[1]: Stopped ignition-disks.service - Ignition (disks).
Jan 23 00:06:10.106085 systemd[1]: ignition-kargs.service: Deactivated successfully.
Jan 23 00:06:10.106195 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs).
Jan 23 00:06:10.110040 systemd[1]: ignition-fetch.service: Deactivated successfully.
Jan 23 00:06:10.110141 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch).
Jan 23 00:06:10.115964 systemd[1]: Stopped target network.target - Network.
Jan 23 00:06:10.118347 systemd[1]: ignition-fetch-offline.service: Deactivated successfully.
Jan 23 00:06:10.118466 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 23 00:06:10.126401 systemd[1]: Stopped target paths.target - Path Units.
Jan 23 00:06:10.130928 systemd[1]: systemd-ask-password-console.path: Deactivated successfully.
Jan 23 00:06:10.142356 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:10.157346 systemd[1]: Stopped target slices.target - Slice Units.
Jan 23 00:06:10.162108 systemd[1]: Stopped target sockets.target - Socket Units.
Jan 23 00:06:10.166708 systemd[1]: iscsid.socket: Deactivated successfully.
Jan 23 00:06:10.166796 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket.
Jan 23 00:06:10.169325 systemd[1]: iscsiuio.socket: Deactivated successfully.
Jan 23 00:06:10.169395 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 23 00:06:10.172158 systemd[1]: ignition-setup.service: Deactivated successfully.
Jan 23 00:06:10.172262 systemd[1]: Stopped ignition-setup.service - Ignition (setup).
Jan 23 00:06:10.172553 systemd[1]: ignition-setup-pre.service: Deactivated successfully.
Jan 23 00:06:10.172625 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup.
Jan 23 00:06:10.173080 systemd[1]: Stopping systemd-networkd.service - Network Configuration...
Jan 23 00:06:10.173334 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:10.193308 systemd[1]: sysroot-boot.mount: Deactivated successfully.
Jan 23 00:06:10.209204 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 23 00:06:10.209634 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:10.222083 systemd[1]: run-credentials-systemd\x2dresolved.service.mount: Deactivated successfully.
Jan 23 00:06:10.222923 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully.
Jan 23 00:06:10.223012 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:10.236357 systemd[1]: run-credentials-systemd\x2dtmpfiles\x2dsetup.service.mount: Deactivated successfully.
Jan 23 00:06:10.237169 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 23 00:06:10.237424 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 23 00:06:10.251300 systemd[1]: run-credentials-systemd\x2dnetworkd.service.mount: Deactivated successfully.
Jan 23 00:06:10.254804 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 23 00:06:10.263605 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 23 00:06:10.263732 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:10.278245 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 23 00:06:10.280586 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 23 00:06:10.280739 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 23 00:06:10.284934 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 23 00:06:10.285096 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:10.307753 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 23 00:06:10.307858 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:10.310744 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:06:10.315027 systemd[1]: run-credentials-systemd\x2dsysctl.service.mount: Deactivated successfully.
Jan 23 00:06:10.322139 systemd[1]: sysroot-boot.service: Deactivated successfully.
Jan 23 00:06:10.322341 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot.
Jan 23 00:06:10.334907 systemd[1]: initrd-setup-root.service: Deactivated successfully.
Jan 23 00:06:10.335524 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup.
Jan 23 00:06:10.361916 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 23 00:06:10.363450 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:06:10.372588 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 23 00:06:10.372675 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:10.375745 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 23 00:06:10.375809 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:10.378729 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 23 00:06:10.378814 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 23 00:06:10.386534 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 23 00:06:10.386649 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 23 00:06:10.394460 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 23 00:06:10.394584 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 23 00:06:10.411658 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 23 00:06:10.414756 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 23 00:06:10.414867 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:10.428830 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 23 00:06:10.428929 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:06:10.437678 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 23 00:06:10.437777 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:10.444422 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 23 00:06:10.449167 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 23 00:06:10.464038 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 23 00:06:10.464466 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 23 00:06:10.474156 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 23 00:06:10.477915 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 23 00:06:10.508444 systemd[1]: Switching root.
Jan 23 00:06:10.562550 systemd-journald[256]: Journal stopped
Jan 23 00:06:13.107441 systemd-journald[256]: Received SIGTERM from PID 1 (systemd).
Jan 23 00:06:13.111616 kernel: SELinux: policy capability network_peer_controls=1
Jan 23 00:06:13.111669 kernel: SELinux: policy capability open_perms=1
Jan 23 00:06:13.111708 kernel: SELinux: policy capability extended_socket_class=1
Jan 23 00:06:13.111743 kernel: SELinux: policy capability always_check_network=0
Jan 23 00:06:13.111774 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 23 00:06:13.111804 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 23 00:06:13.111833 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 23 00:06:13.111862 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 23 00:06:13.111891 kernel: SELinux: policy capability userspace_initial_context=0
Jan 23 00:06:13.111918 kernel: audit: type=1403 audit(1769126770.981:2): auid=4294967295 ses=4294967295 lsm=selinux res=1
Jan 23 00:06:13.111957 systemd[1]: Successfully loaded SELinux policy in 116.902ms.
Jan 23 00:06:13.112006 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 15.338ms.
Jan 23 00:06:13.112043 systemd[1]: systemd 256.8 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 23 00:06:13.112075 systemd[1]: Detected virtualization amazon.
Jan 23 00:06:13.112105 systemd[1]: Detected architecture arm64.
Jan 23 00:06:13.112136 systemd[1]: Detected first boot.
Jan 23 00:06:13.112164 systemd[1]: Initializing machine ID from VM UUID.
Jan 23 00:06:13.112194 kernel: NET: Registered PF_VSOCK protocol family
Jan 23 00:06:13.112224 zram_generator::config[1444]: No configuration found.
Jan 23 00:06:13.112258 systemd[1]: Populated /etc with preset unit settings.
Jan 23 00:06:13.112289 systemd[1]: run-credentials-systemd\x2djournald.service.mount: Deactivated successfully.
Jan 23 00:06:13.112321 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 23 00:06:13.112351 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 23 00:06:13.112379 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 23 00:06:13.112414 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 23 00:06:13.112446 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 23 00:06:13.112474 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 23 00:06:13.114583 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 23 00:06:13.114639 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 23 00:06:13.114764 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 23 00:06:13.115164 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 23 00:06:13.115433 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 23 00:06:13.115466 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 23 00:06:13.119544 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 23 00:06:13.119601 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 23 00:06:13.119643 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 23 00:06:13.119682 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 23 00:06:13.119714 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 23 00:06:13.119747 systemd[1]: Expecting device dev-ttyS0.device - /dev/ttyS0...
Jan 23 00:06:13.119778 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 23 00:06:13.119806 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 23 00:06:13.119837 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 23 00:06:13.119867 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 23 00:06:13.119896 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 23 00:06:13.119929 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 23 00:06:13.119958 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 23 00:06:13.119989 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 23 00:06:13.120020 systemd[1]: Reached target slices.target - Slice Units.
Jan 23 00:06:13.120050 systemd[1]: Reached target swap.target - Swaps.
Jan 23 00:06:13.120077 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 23 00:06:13.120105 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 23 00:06:13.120135 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 23 00:06:13.120165 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 23 00:06:13.120198 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 23 00:06:13.120227 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 23 00:06:13.120257 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 23 00:06:13.120286 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 23 00:06:13.120319 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 23 00:06:13.120350 systemd[1]: Mounting media.mount - External Media Directory...
Jan 23 00:06:13.120427 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 23 00:06:13.121228 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 23 00:06:13.127217 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 23 00:06:13.127275 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 23 00:06:13.127305 systemd[1]: Reached target machines.target - Containers.
Jan 23 00:06:13.127334 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 23 00:06:13.127363 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:13.127394 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 23 00:06:13.127423 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 23 00:06:13.127453 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:06:13.127484 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:06:13.127545 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:06:13.127577 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 23 00:06:13.127609 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:06:13.127638 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 23 00:06:13.127669 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 23 00:06:13.127701 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 23 00:06:13.127733 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 23 00:06:13.127763 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 23 00:06:13.127795 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:13.127830 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 23 00:06:13.127859 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 23 00:06:13.127888 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 23 00:06:13.127919 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 23 00:06:13.127947 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 23 00:06:13.127976 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 23 00:06:13.128007 systemd[1]: verity-setup.service: Deactivated successfully.
Jan 23 00:06:13.128037 systemd[1]: Stopped verity-setup.service.
Jan 23 00:06:13.128070 kernel: fuse: init (API version 7.41)
Jan 23 00:06:13.128099 kernel: loop: module loaded
Jan 23 00:06:13.128127 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 23 00:06:13.128155 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 23 00:06:13.128183 systemd[1]: Mounted media.mount - External Media Directory.
Jan 23 00:06:13.128216 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 23 00:06:13.128246 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 23 00:06:13.128278 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 23 00:06:13.128307 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 23 00:06:13.128334 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 23 00:06:13.128365 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 23 00:06:13.128399 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:06:13.128429 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:06:13.128457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:06:13.128485 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:06:13.128539 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 23 00:06:13.128569 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 23 00:06:13.128597 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:06:13.128625 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:06:13.128660 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 23 00:06:13.128691 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 23 00:06:13.128721 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 23 00:06:13.128749 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 23 00:06:13.128776 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 23 00:06:13.128804 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 23 00:06:13.128833 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 23 00:06:13.128861 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 23 00:06:13.128889 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 23 00:06:13.128921 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache...
Jan 23 00:06:13.128949 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:13.128979 kernel: ACPI: bus type drm_connector registered
Jan 23 00:06:13.129011 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 23 00:06:13.129052 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:06:13.129085 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 23 00:06:13.129116 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:06:13.129144 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 23 00:06:13.129175 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 23 00:06:13.129204 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 23 00:06:13.129286 systemd-journald[1526]: Collecting audit messages is disabled.
Jan 23 00:06:13.129345 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:06:13.129379 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:06:13.129408 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 23 00:06:13.129436 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 23 00:06:13.129465 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 23 00:06:13.133785 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 23 00:06:13.133873 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 23 00:06:13.133913 systemd-journald[1526]: Journal started
Jan 23 00:06:13.133976 systemd-journald[1526]: Runtime Journal (/run/log/journal/ec2fac5b7f3ec7932b3dbb8c05ad2e78) is 8M, max 75.3M, 67.3M free.
Jan 23 00:06:12.340175 systemd[1]: Queued start job for default target multi-user.target.
Jan 23 00:06:12.355306 systemd[1]: Unnecessary job was removed for dev-nvme0n1p6.device - /dev/nvme0n1p6.
Jan 23 00:06:12.356153 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 23 00:06:13.141185 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 23 00:06:13.150168 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 23 00:06:13.150304 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 23 00:06:13.227057 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 23 00:06:13.280849 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 23 00:06:13.302552 kernel: loop0: detected capacity change from 0 to 100632
Jan 23 00:06:13.317169 systemd-journald[1526]: Time spent on flushing to /var/log/journal/ec2fac5b7f3ec7932b3dbb8c05ad2e78 is 75.415ms for 926 entries.
Jan 23 00:06:13.317169 systemd-journald[1526]: System Journal (/var/log/journal/ec2fac5b7f3ec7932b3dbb8c05ad2e78) is 8M, max 195.6M, 187.6M free.
Jan 23 00:06:13.407466 systemd-journald[1526]: Received client request to flush runtime journal.
Jan 23 00:06:13.341708 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 23 00:06:13.359867 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 23 00:06:13.373653 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 23 00:06:13.382032 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 23 00:06:13.417256 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 23 00:06:13.422487 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 23 00:06:13.431722 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 23 00:06:13.451538 kernel: loop1: detected capacity change from 0 to 200800
Jan 23 00:06:13.457976 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Jan 23 00:06:13.458010 systemd-tmpfiles[1592]: ACLs are not supported, ignoring.
Jan 23 00:06:13.469607 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 23 00:06:13.598807 kernel: loop2: detected capacity change from 0 to 119840
Jan 23 00:06:13.721582 kernel: loop3: detected capacity change from 0 to 61264
Jan 23 00:06:13.852545 kernel: loop4: detected capacity change from 0 to 100632
Jan 23 00:06:13.875523 kernel: loop5: detected capacity change from 0 to 200800
Jan 23 00:06:13.903531 kernel: loop6: detected capacity change from 0 to 119840
Jan 23 00:06:13.919538 kernel: loop7: detected capacity change from 0 to 61264
Jan 23 00:06:13.935243 (sd-merge)[1606]: Using extensions 'containerd-flatcar', 'docker-flatcar', 'kubernetes', 'oem-ami'.
Jan 23 00:06:13.937628 (sd-merge)[1606]: Merged extensions into '/usr'.
Jan 23 00:06:13.945176 systemd[1]: Reload requested from client PID 1559 ('systemd-sysext') (unit systemd-sysext.service)...
Jan 23 00:06:13.945332 systemd[1]: Reloading...
Jan 23 00:06:14.138568 zram_generator::config[1634]: No configuration found.
Jan 23 00:06:14.573405 systemd[1]: Reloading finished in 627 ms.
Jan 23 00:06:14.599572 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database.
Jan 23 00:06:14.603248 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/.
Jan 23 00:06:14.618763 systemd[1]: Starting ensure-sysext.service...
Jan 23 00:06:14.625044 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 23 00:06:14.632172 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 23 00:06:14.669783 systemd[1]: Reload requested from client PID 1684 ('systemctl') (unit ensure-sysext.service)...
Jan 23 00:06:14.669830 systemd[1]: Reloading...
Jan 23 00:06:14.675553 ldconfig[1552]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start.
Jan 23 00:06:14.694587 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring.
Jan 23 00:06:14.695128 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring.
Jan 23 00:06:14.695989 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring.
Jan 23 00:06:14.696681 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd-flatcar.conf:6: Duplicate line for path "/var/log/journal", ignoring.
Jan 23 00:06:14.698428 systemd-tmpfiles[1685]: /usr/lib/tmpfiles.d/systemd.conf:29: Duplicate line for path "/var/lib/systemd", ignoring.
Jan 23 00:06:14.699322 systemd-tmpfiles[1685]: ACLs are not supported, ignoring.
Jan 23 00:06:14.699449 systemd-tmpfiles[1685]: ACLs are not supported, ignoring.
Jan 23 00:06:14.707879 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:06:14.707904 systemd-tmpfiles[1685]: Skipping /boot
Jan 23 00:06:14.740000 systemd-tmpfiles[1685]: Detected autofs mount point /boot during canonicalization of boot.
Jan 23 00:06:14.740032 systemd-tmpfiles[1685]: Skipping /boot
Jan 23 00:06:14.784577 zram_generator::config[1713]: No configuration found.
Jan 23 00:06:14.823337 systemd-udevd[1686]: Using default interface naming scheme 'v255'.
Jan 23 00:06:15.150695 (udev-worker)[1768]: Network interface NamePolicy= disabled on kernel command line.
Jan 23 00:06:15.393452 systemd[1]: Condition check resulted in dev-ttyS0.device - /dev/ttyS0 being skipped.
Jan 23 00:06:15.394243 systemd[1]: Reloading finished in 723 ms.
Jan 23 00:06:15.428152 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 23 00:06:15.431803 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache.
Jan 23 00:06:15.458183 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 23 00:06:15.496788 systemd[1]: Starting audit-rules.service - Load Audit Rules...
Jan 23 00:06:15.504525 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs...
Jan 23 00:06:15.510991 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog...
Jan 23 00:06:15.518901 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 23 00:06:15.533978 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 23 00:06:15.540903 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP...
Jan 23 00:06:15.568191 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 23 00:06:15.621587 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:15.643021 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 23 00:06:15.648702 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 23 00:06:15.655120 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 23 00:06:15.658764 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:15.659099 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:15.685645 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs.
Jan 23 00:06:15.693281 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 23 00:06:15.700750 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 23 00:06:15.704914 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 23 00:06:15.705180 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 23 00:06:15.705608 systemd[1]: Reached target time-set.target - System Time Set.
Jan 23 00:06:15.708428 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt).
Jan 23 00:06:15.725156 systemd[1]: Finished ensure-sysext.service.
Jan 23 00:06:15.761620 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 23 00:06:15.801769 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP.
Jan 23 00:06:15.806700 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog.
Jan 23 00:06:15.825725 systemd[1]: Starting systemd-update-done.service - Update is Completed...
Jan 23 00:06:15.848198 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 23 00:06:15.848688 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 23 00:06:15.872151 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 23 00:06:15.872565 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 23 00:06:15.883065 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 23 00:06:15.884763 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 23 00:06:15.889257 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 23 00:06:15.890011 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 23 00:06:15.895030 systemd[1]: Finished systemd-update-done.service - Update is Completed.
Jan 23 00:06:15.899217 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 23 00:06:15.899362 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 23 00:06:15.961235 augenrules[1884]: No rules
Jan 23 00:06:15.966156 systemd[1]: audit-rules.service: Deactivated successfully.
Jan 23 00:06:15.967619 systemd[1]: Finished audit-rules.service - Load Audit Rules.
Jan 23 00:06:16.061574 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 23 00:06:16.192600 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - Amazon Elastic Block Store OEM.
Jan 23 00:06:16.197658 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM...
Jan 23 00:06:16.239670 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM.
Jan 23 00:06:16.277683 systemd[1]: Started systemd-userdbd.service - User Database Manager.
Jan 23 00:06:16.400355 systemd-resolved[1824]: Positive Trust Anchors:
Jan 23 00:06:16.400393 systemd-resolved[1824]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 23 00:06:16.400454 systemd-resolved[1824]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 23 00:06:16.403564 systemd-networkd[1822]: lo: Link UP
Jan 23 00:06:16.404005 systemd-networkd[1822]: lo: Gained carrier
Jan 23 00:06:16.406948 systemd-networkd[1822]: Enumeration completed
Jan 23 00:06:16.407231 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 23 00:06:16.412934 systemd-networkd[1822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:16.412959 systemd-networkd[1822]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 23 00:06:16.414315 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd...
Jan 23 00:06:16.421817 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 23 00:06:16.427075 systemd-networkd[1822]: eth0: Link UP
Jan 23 00:06:16.427717 systemd-networkd[1822]: eth0: Gained carrier
Jan 23 00:06:16.427760 systemd-networkd[1822]: eth0: found matching network '/usr/lib/systemd/network/zz-default.network', based on potentially unpredictable interface name.
Jan 23 00:06:16.434827 systemd-resolved[1824]: Defaulting to hostname 'linux'.
Jan 23 00:06:16.437623 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 23 00:06:16.440573 systemd[1]: Reached target network.target - Network. Jan 23 00:06:16.443479 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 23 00:06:16.446738 systemd[1]: Reached target sysinit.target - System Initialization. Jan 23 00:06:16.449788 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 23 00:06:16.455271 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 23 00:06:16.459640 systemd-networkd[1822]: eth0: DHCPv4 address 172.31.18.130/20, gateway 172.31.16.1 acquired from 172.31.16.1 Jan 23 00:06:16.463006 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 23 00:06:16.466738 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 23 00:06:16.469754 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 23 00:06:16.472653 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 23 00:06:16.472716 systemd[1]: Reached target paths.target - Path Units. Jan 23 00:06:16.475580 systemd[1]: Reached target timers.target - Timer Units. Jan 23 00:06:16.479448 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 23 00:06:16.484558 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 23 00:06:16.490905 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 23 00:06:16.494310 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 23 00:06:16.497084 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 23 00:06:16.504528 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. 
Jan 23 00:06:16.507528 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 23 00:06:16.511636 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 23 00:06:16.517529 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 23 00:06:16.521142 systemd[1]: Reached target sockets.target - Socket Units. Jan 23 00:06:16.523831 systemd[1]: Reached target basic.target - Basic System. Jan 23 00:06:16.526248 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:06:16.526421 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 23 00:06:16.529377 systemd[1]: Starting containerd.service - containerd container runtime... Jan 23 00:06:16.535793 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 23 00:06:16.543972 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 23 00:06:16.554566 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 23 00:06:16.562780 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 23 00:06:16.567859 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 23 00:06:16.570287 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 23 00:06:16.573720 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 23 00:06:16.590445 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 00:06:16.599813 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 23 00:06:16.605844 systemd[1]: Starting setup-oem.service - Setup OEM... 
Jan 23 00:06:16.614953 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 23 00:06:16.622993 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 23 00:06:16.642293 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 23 00:06:16.646406 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 23 00:06:16.660061 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 23 00:06:16.669699 systemd[1]: Starting update-engine.service - Update Engine... Jan 23 00:06:16.674745 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 23 00:06:16.685624 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 23 00:06:16.690711 extend-filesystems[1975]: Found /dev/nvme0n1p6 Jan 23 00:06:16.698583 jq[1974]: false Jan 23 00:06:16.708318 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 23 00:06:16.708913 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 23 00:06:16.734346 extend-filesystems[1975]: Found /dev/nvme0n1p9 Jan 23 00:06:16.744768 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 23 00:06:16.745240 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 23 00:06:16.775851 extend-filesystems[1975]: Checking size of /dev/nvme0n1p9 Jan 23 00:06:16.793809 dbus-daemon[1972]: [system] SELinux support is enabled Jan 23 00:06:16.794107 systemd[1]: Started dbus.service - D-Bus System Message Bus. 
Jan 23 00:06:16.809205 dbus-daemon[1972]: [system] Activating systemd to hand-off: service name='org.freedesktop.hostname1' unit='dbus-org.freedesktop.hostname1.service' requested by ':1.0' (uid=244 pid=1822 comm="/usr/lib/systemd/systemd-networkd" label="system_u:system_r:kernel_t:s0") Jan 23 00:06:16.826729 jq[1989]: true Jan 23 00:06:16.840520 extend-filesystems[1975]: Resized partition /dev/nvme0n1p9 Jan 23 00:06:16.856523 extend-filesystems[2020]: resize2fs 1.47.3 (8-Jul-2025) Jan 23 00:06:16.865547 systemd[1]: motdgen.service: Deactivated successfully. Jan 23 00:06:16.867597 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 23 00:06:16.892530 kernel: EXT4-fs (nvme0n1p9): resizing filesystem from 553472 to 3587067 blocks Jan 23 00:06:16.928075 coreos-metadata[1971]: Jan 23 00:06:16.927 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 00:06:16.936629 (ntainerd)[2014]: containerd.service: Referenced but unset environment variable evaluates to an empty string: TORCX_IMAGEDIR, TORCX_UNPACKDIR Jan 23 00:06:16.941616 coreos-metadata[1971]: Jan 23 00:06:16.940 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-id: Attempt #1 Jan 23 00:06:16.952762 coreos-metadata[1971]: Jan 23 00:06:16.952 INFO Fetch successful Jan 23 00:06:16.952762 coreos-metadata[1971]: Jan 23 00:06:16.952 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/instance-type: Attempt #1 Jan 23 00:06:16.955260 tar[1992]: linux-arm64/LICENSE Jan 23 00:06:16.958114 tar[1992]: linux-arm64/helm Jan 23 00:06:16.959836 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 23 00:06:16.962631 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 23 00:06:16.985369 jq[2022]: true Jan 23 00:06:16.989192 coreos-metadata[1971]: Jan 23 00:06:16.984 INFO Fetch successful Jan 23 00:06:16.989192 coreos-metadata[1971]: Jan 23 00:06:16.984 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/local-ipv4: Attempt #1 Jan 23 00:06:16.989192 coreos-metadata[1971]: Jan 23 00:06:16.986 INFO Fetch successful Jan 23 00:06:16.989192 coreos-metadata[1971]: Jan 23 00:06:16.986 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-ipv4: Attempt #1 Jan 23 00:06:16.962880 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 23 00:06:16.966733 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 23 00:06:16.997618 coreos-metadata[1971]: Jan 23 00:06:16.992 INFO Fetch successful Jan 23 00:06:16.997618 coreos-metadata[1971]: Jan 23 00:06:16.992 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/ipv6: Attempt #1 Jan 23 00:06:16.997618 coreos-metadata[1971]: Jan 23 00:06:16.994 INFO Fetch failed with 404: resource not found Jan 23 00:06:16.997618 coreos-metadata[1971]: Jan 23 00:06:16.994 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone: Attempt #1 Jan 23 00:06:16.966772 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 23 00:06:16.974588 systemd[1]: Finished setup-oem.service - Setup OEM. Jan 23 00:06:17.001776 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.systemd1' Jan 23 00:06:17.004886 coreos-metadata[1971]: Jan 23 00:06:17.003 INFO Fetch successful Jan 23 00:06:17.004886 coreos-metadata[1971]: Jan 23 00:06:17.003 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/placement/availability-zone-id: Attempt #1 Jan 23 00:06:17.009718 systemd[1]: Starting systemd-hostnamed.service - Hostname Service... 
Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.013 INFO Fetch successful Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/hostname: Attempt #1 Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.013 INFO Fetch successful Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.013 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-hostname: Attempt #1 Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.015 INFO Fetch successful Jan 23 00:06:17.018999 coreos-metadata[1971]: Jan 23 00:06:17.015 INFO Fetching http://169.254.169.254/2021-01-03/dynamic/instance-identity/document: Attempt #1 Jan 23 00:06:17.019357 update_engine[1988]: I20260123 00:06:17.013044 1988 main.cc:92] Flatcar Update Engine starting Jan 23 00:06:17.020929 coreos-metadata[1971]: Jan 23 00:06:17.020 INFO Fetch successful Jan 23 00:06:17.031104 ntpd[1977]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: ---------------------------------------------------- Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: corporation. 
Support and training for ntp-4 are Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: available at https://www.nwtime.org/support Jan 23 00:06:17.036951 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: ---------------------------------------------------- Jan 23 00:06:17.031229 ntpd[1977]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:06:17.031247 ntpd[1977]: ---------------------------------------------------- Jan 23 00:06:17.031264 ntpd[1977]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:06:17.031280 ntpd[1977]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:06:17.031295 ntpd[1977]: corporation. Support and training for ntp-4 are Jan 23 00:06:17.031311 ntpd[1977]: available at https://www.nwtime.org/support Jan 23 00:06:17.031327 ntpd[1977]: ---------------------------------------------------- Jan 23 00:06:17.050757 systemd[1]: Started update-engine.service - Update Engine. Jan 23 00:06:17.057146 update_engine[1988]: I20260123 00:06:17.055739 1988 update_check_scheduler.cc:74] Next update check in 11m30s Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: proto: precision = 0.096 usec (-23) Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: basedate set to 2026-01-10 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: gps base set to 2026-01-11 (week 2401) Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Listen normally on 3 eth0 172.31.18.130:123 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: Listen normally on 4 lo [::1]:123 Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: bind(21) AF_INET6 [fe80::4e6:acff:fe36:941%2]:123 flags 0x811 failed: 
Cannot assign requested address Jan 23 00:06:17.057208 ntpd[1977]: 23 Jan 00:06:17 ntpd[1977]: unable to create socket on eth0 (5) for [fe80::4e6:acff:fe36:941%2]:123 Jan 23 00:06:17.053144 ntpd[1977]: proto: precision = 0.096 usec (-23) Jan 23 00:06:17.054375 ntpd[1977]: basedate set to 2026-01-10 Jan 23 00:06:17.069392 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 23 00:06:17.054403 ntpd[1977]: gps base set to 2026-01-11 (week 2401) Jan 23 00:06:17.054624 ntpd[1977]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:06:17.074837 systemd-coredump[2043]: Process 1977 (ntpd) of user 0 terminated abnormally with signal 11/SEGV, processing... Jan 23 00:06:17.054671 ntpd[1977]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:06:17.079654 systemd[1]: Created slice system-systemd\x2dcoredump.slice - Slice /system/systemd-coredump. Jan 23 00:06:17.055082 ntpd[1977]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:06:17.055130 ntpd[1977]: Listen normally on 3 eth0 172.31.18.130:123 Jan 23 00:06:17.056315 ntpd[1977]: Listen normally on 4 lo [::1]:123 Jan 23 00:06:17.056377 ntpd[1977]: bind(21) AF_INET6 [fe80::4e6:acff:fe36:941%2]:123 flags 0x811 failed: Cannot assign requested address Jan 23 00:06:17.056414 ntpd[1977]: unable to create socket on eth0 (5) for [fe80::4e6:acff:fe36:941%2]:123 Jan 23 00:06:17.092228 systemd[1]: Started systemd-coredump@0-2043-0.service - Process Core Dump (PID 2043/UID 0). Jan 23 00:06:17.175538 kernel: EXT4-fs (nvme0n1p9): resized filesystem to 3587067 Jan 23 00:06:17.198682 extend-filesystems[2020]: Filesystem at /dev/nvme0n1p9 is mounted on /; on-line resizing required Jan 23 00:06:17.198682 extend-filesystems[2020]: old_desc_blocks = 1, new_desc_blocks = 2 Jan 23 00:06:17.198682 extend-filesystems[2020]: The filesystem on /dev/nvme0n1p9 is now 3587067 (4k) blocks long. 
Jan 23 00:06:17.234690 extend-filesystems[1975]: Resized filesystem in /dev/nvme0n1p9 Jan 23 00:06:17.203262 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 23 00:06:17.203837 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 23 00:06:17.258143 systemd-logind[1983]: Watching system buttons on /dev/input/event0 (Power Button) Jan 23 00:06:17.258185 systemd-logind[1983]: Watching system buttons on /dev/input/event1 (Sleep Button) Jan 23 00:06:17.259404 systemd-logind[1983]: New seat seat0. Jan 23 00:06:17.271792 systemd[1]: Started systemd-logind.service - User Login Management. Jan 23 00:06:17.285204 bash[2064]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:06:17.309267 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 23 00:06:17.313529 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 23 00:06:17.340052 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 23 00:06:17.351132 systemd[1]: Starting sshkeys.service... Jan 23 00:06:17.498580 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 23 00:06:17.506090 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 23 00:06:17.680249 coreos-metadata[2118]: Jan 23 00:06:17.680 INFO Putting http://169.254.169.254/latest/api/token: Attempt #1 Jan 23 00:06:17.684773 coreos-metadata[2118]: Jan 23 00:06:17.684 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys: Attempt #1 Jan 23 00:06:17.691858 coreos-metadata[2118]: Jan 23 00:06:17.691 INFO Fetch successful Jan 23 00:06:17.691858 coreos-metadata[2118]: Jan 23 00:06:17.691 INFO Fetching http://169.254.169.254/2021-01-03/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 23 00:06:17.695700 coreos-metadata[2118]: Jan 23 00:06:17.695 INFO Fetch successful Jan 23 00:06:17.699006 unknown[2118]: wrote ssh authorized keys file for user: core Jan 23 00:06:17.752544 containerd[2014]: time="2026-01-23T00:06:17Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 23 00:06:17.764794 update-ssh-keys[2145]: Updated "/home/core/.ssh/authorized_keys" Jan 23 00:06:17.768671 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 23 00:06:17.780619 systemd[1]: Finished sshkeys.service. Jan 23 00:06:17.799243 containerd[2014]: time="2026-01-23T00:06:17.798958704Z" level=info msg="starting containerd" revision=4ac6c20c7bbf8177f29e46bbdc658fec02ffb8ad version=v2.0.7 Jan 23 00:06:17.828915 systemd-networkd[1822]: eth0: Gained IPv6LL Jan 23 00:06:17.842058 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 23 00:06:17.860128 systemd[1]: Reached target network-online.target - Network is Online. Jan 23 00:06:17.866879 systemd[1]: Started amazon-ssm-agent.service - amazon-ssm-agent. Jan 23 00:06:17.876534 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:17.884136 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... 
Jan 23 00:06:17.928157 systemd[1]: Started systemd-hostnamed.service - Hostname Service. Jan 23 00:06:17.941766 containerd[2014]: time="2026-01-23T00:06:17.941702773Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="13.944µs" Jan 23 00:06:17.947335 containerd[2014]: time="2026-01-23T00:06:17.947093965Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 23 00:06:17.947335 containerd[2014]: time="2026-01-23T00:06:17.947157805Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 23 00:06:17.949543 containerd[2014]: time="2026-01-23T00:06:17.948077629Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 23 00:06:17.949737 containerd[2014]: time="2026-01-23T00:06:17.949689457Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951302917Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951475477Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951527545Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951902725Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 
containerd[2014]: time="2026-01-23T00:06:17.951939565Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951966145Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.951987481Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 23 00:06:17.952520 containerd[2014]: time="2026-01-23T00:06:17.952145665Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.960648709Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.960746545Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.960775717Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.960839089Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.961259257Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 23 00:06:17.962601 containerd[2014]: time="2026-01-23T00:06:17.961413793Z" level=info msg="metadata content store policy set" policy=shared Jan 23 00:06:17.958879 dbus-daemon[1972]: [system] Successfully 
activated service 'org.freedesktop.hostname1' Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.971776537Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.971877697Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.971909557Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.971938117Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.971976325Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972012517Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972049081Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972078889Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972110509Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972140869Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972166297Z" level=info msg="loading plugin" 
id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972196873Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972473353Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 23 00:06:17.974486 containerd[2014]: time="2026-01-23T00:06:17.972556729Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972595897Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972635317Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972665437Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972693313Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972721681Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972748537Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972777421Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972806689Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 23 
00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.972833893Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.973208197Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.973245265Z" level=info msg="Start snapshots syncer" Jan 23 00:06:17.975329 containerd[2014]: time="2026-01-23T00:06:17.973295701Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 23 00:06:17.978542 containerd[2014]: time="2026-01-23T00:06:17.977709001Z" level=info msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"/opt/cni/bin\",\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolume
s\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 23 00:06:17.978542 containerd[2014]: time="2026-01-23T00:06:17.977899825Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 23 00:06:17.978874 containerd[2014]: time="2026-01-23T00:06:17.978014965Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 23 00:06:17.978874 containerd[2014]: time="2026-01-23T00:06:17.978299821Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 23 00:06:17.978874 containerd[2014]: time="2026-01-23T00:06:17.978360613Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 23 00:06:17.978874 containerd[2014]: time="2026-01-23T00:06:17.978394777Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 23 00:06:17.978874 containerd[2014]: time="2026-01-23T00:06:17.978432361Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 23 00:06:17.979572 containerd[2014]: time="2026-01-23T00:06:17.978477829Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 23 00:06:17.979572 containerd[2014]: time="2026-01-23T00:06:17.979402477Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 23 00:06:17.982192 containerd[2014]: 
time="2026-01-23T00:06:17.979449025Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980616385Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980660581Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980690473Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980745361Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980778685Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980802037Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980826505Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980847937Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980872705Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.980899501Z" level=info msg="loading plugin" 
id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.981072673Z" level=info msg="runtime interface created" Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.981090205Z" level=info msg="created NRI interface" Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.981112993Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 23 00:06:17.982192 containerd[2014]: time="2026-01-23T00:06:17.981143149Z" level=info msg="Connect containerd service" Jan 23 00:06:17.982915 containerd[2014]: time="2026-01-23T00:06:17.981229705Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 23 00:06:17.983994 dbus-daemon[1972]: [system] Activating via systemd: service name='org.freedesktop.PolicyKit1' unit='polkit.service' requested by ':1.7' (uid=0 pid=2038 comm="/usr/lib/systemd/systemd-hostnamed" label="system_u:system_r:kernel_t:s0") Jan 23 00:06:17.995904 containerd[2014]: time="2026-01-23T00:06:17.990026941Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 23 00:06:17.996997 systemd[1]: Starting polkit.service - Authorization Manager... Jan 23 00:06:18.001001 locksmithd[2042]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 23 00:06:18.186415 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 23 00:06:18.205552 amazon-ssm-agent[2162]: Initializing new seelog logger Jan 23 00:06:18.207412 amazon-ssm-agent[2162]: New Seelog Logger Creation Complete Jan 23 00:06:18.207412 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. 
Jan 23 00:06:18.207412 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.207412 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 processing appconfig overrides Jan 23 00:06:18.208595 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.208727 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.208932 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 processing appconfig overrides Jan 23 00:06:18.209259 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.209336 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.209548 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 processing appconfig overrides Jan 23 00:06:18.210793 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2080 INFO Proxy environment variables: Jan 23 00:06:18.218192 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.218192 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:18.218192 amazon-ssm-agent[2162]: 2026/01/23 00:06:18 processing appconfig overrides Jan 23 00:06:18.303687 systemd-coredump[2045]: Process 1977 (ntpd) of user 0 dumped core. Module libnss_usrfiles.so.2 without build-id. Module libgcc_s.so.1 without build-id. Module libc.so.6 without build-id. Module libcrypto.so.3 without build-id. Module libm.so.6 without build-id. Module libcap.so.2 without build-id. Module ntpd without build-id. 
Stack trace of thread 1977: #0 0x0000aaaad3b20b5c n/a (ntpd + 0x60b5c) #1 0x0000aaaad3acfe60 n/a (ntpd + 0xfe60) #2 0x0000aaaad3ad0240 n/a (ntpd + 0x10240) #3 0x0000aaaad3acbe14 n/a (ntpd + 0xbe14) #4 0x0000aaaad3acd3ec n/a (ntpd + 0xd3ec) #5 0x0000aaaad3ad5a38 n/a (ntpd + 0x15a38) #6 0x0000aaaad3ac738c n/a (ntpd + 0x738c) #7 0x0000ffff971c2034 n/a (libc.so.6 + 0x22034) #8 0x0000ffff971c2118 __libc_start_main (libc.so.6 + 0x22118) #9 0x0000aaaad3ac73f0 n/a (ntpd + 0x73f0) ELF object binary architecture: AARCH64 Jan 23 00:06:18.325687 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2085 INFO https_proxy: Jan 23 00:06:18.320870 systemd[1]: ntpd.service: Main process exited, code=dumped, status=11/SEGV Jan 23 00:06:18.321201 systemd[1]: ntpd.service: Failed with result 'core-dump'. Jan 23 00:06:18.329665 systemd[1]: systemd-coredump@0-2043-0.service: Deactivated successfully. Jan 23 00:06:18.416364 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2085 INFO http_proxy: Jan 23 00:06:18.506850 containerd[2014]: time="2026-01-23T00:06:18.506635668Z" level=info msg="Start subscribing containerd event" Jan 23 00:06:18.506850 containerd[2014]: time="2026-01-23T00:06:18.506773992Z" level=info msg="Start recovering state" Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.506914272Z" level=info msg="Start event monitor" Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.506942196Z" level=info msg="Start cni network conf syncer for default" Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.506965536Z" level=info msg="Start streaming server" Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.506984808Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.507001464Z" level=info msg="runtime interface starting up..." Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.507020172Z" level=info msg="starting plugins..." 
Jan 23 00:06:18.507055 containerd[2014]: time="2026-01-23T00:06:18.507047136Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 23 00:06:18.519626 containerd[2014]: time="2026-01-23T00:06:18.508727316Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 23 00:06:18.519626 containerd[2014]: time="2026-01-23T00:06:18.508839048Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 23 00:06:18.519626 containerd[2014]: time="2026-01-23T00:06:18.508942608Z" level=info msg="containerd successfully booted in 0.761127s" Jan 23 00:06:18.519789 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2085 INFO no_proxy: Jan 23 00:06:18.509087 systemd[1]: Started containerd.service - containerd container runtime. Jan 23 00:06:18.513925 systemd[1]: ntpd.service: Scheduled restart job, restart counter is at 1. Jan 23 00:06:18.521105 systemd[1]: Started ntpd.service - Network Time Service. Jan 23 00:06:18.619851 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2090 INFO Checking if agent identity type OnPrem can be assumed Jan 23 00:06:18.628082 ntpd[2214]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: ntpd 4.2.8p18@1.4062-o Thu Jan 22 21:36:07 UTC 2026 (1): Starting Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: ---------------------------------------------------- Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: corporation. 
Support and training for ntp-4 are Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: available at https://www.nwtime.org/support Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: ---------------------------------------------------- Jan 23 00:06:18.630018 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: proto: precision = 0.096 usec (-23) Jan 23 00:06:18.628187 ntpd[2214]: Command line: /usr/sbin/ntpd -g -n -u ntp:ntp Jan 23 00:06:18.628205 ntpd[2214]: ---------------------------------------------------- Jan 23 00:06:18.628222 ntpd[2214]: ntp-4 is maintained by Network Time Foundation, Jan 23 00:06:18.628237 ntpd[2214]: Inc. (NTF), a non-profit 501(c)(3) public-benefit Jan 23 00:06:18.628252 ntpd[2214]: corporation. Support and training for ntp-4 are Jan 23 00:06:18.628268 ntpd[2214]: available at https://www.nwtime.org/support Jan 23 00:06:18.628283 ntpd[2214]: ---------------------------------------------------- Jan 23 00:06:18.629320 ntpd[2214]: proto: precision = 0.096 usec (-23) Jan 23 00:06:18.632338 ntpd[2214]: basedate set to 2026-01-10 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: basedate set to 2026-01-10 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: gps base set to 2026-01-11 (week 2401) Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen normally on 3 eth0 172.31.18.130:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen normally on 4 lo [::1]:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listen normally on 5 eth0 [fe80::4e6:acff:fe36:941%2]:123 Jan 23 00:06:18.636662 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: Listening on routing socket on 
fd #22 for interface updates Jan 23 00:06:18.632375 ntpd[2214]: gps base set to 2026-01-11 (week 2401) Jan 23 00:06:18.632551 ntpd[2214]: Listen and drop on 0 v6wildcard [::]:123 Jan 23 00:06:18.632596 ntpd[2214]: Listen and drop on 1 v4wildcard 0.0.0.0:123 Jan 23 00:06:18.632869 ntpd[2214]: Listen normally on 2 lo 127.0.0.1:123 Jan 23 00:06:18.632911 ntpd[2214]: Listen normally on 3 eth0 172.31.18.130:123 Jan 23 00:06:18.632954 ntpd[2214]: Listen normally on 4 lo [::1]:123 Jan 23 00:06:18.632997 ntpd[2214]: Listen normally on 5 eth0 [fe80::4e6:acff:fe36:941%2]:123 Jan 23 00:06:18.633036 ntpd[2214]: Listening on routing socket on fd #22 for interface updates Jan 23 00:06:18.661079 ntpd[2214]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:06:18.667660 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:06:18.667660 ntpd[2214]: 23 Jan 00:06:18 ntpd[2214]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:06:18.661341 ntpd[2214]: kernel reports TIME_ERROR: 0x41: Clock Unsynchronized Jan 23 00:06:18.698019 polkitd[2169]: Started polkitd version 126 Jan 23 00:06:18.721339 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.2090 INFO Checking if agent identity type EC2 can be assumed Jan 23 00:06:18.734243 polkitd[2169]: Loading rules from directory /etc/polkit-1/rules.d Jan 23 00:06:18.738441 polkitd[2169]: Loading rules from directory /run/polkit-1/rules.d Jan 23 00:06:18.741541 polkitd[2169]: Error opening rules directory: Error opening directory “/run/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:06:18.743486 polkitd[2169]: Loading rules from directory /usr/local/share/polkit-1/rules.d Jan 23 00:06:18.743575 polkitd[2169]: Error opening rules directory: Error opening directory “/usr/local/share/polkit-1/rules.d”: No such file or directory (g-file-error-quark, 4) Jan 23 00:06:18.743658 polkitd[2169]: Loading rules from directory /usr/share/polkit-1/rules.d Jan 
23 00:06:18.751856 polkitd[2169]: Finished loading, compiling and executing 2 rules Jan 23 00:06:18.752766 systemd[1]: Started polkit.service - Authorization Manager. Jan 23 00:06:18.760082 dbus-daemon[1972]: [system] Successfully activated service 'org.freedesktop.PolicyKit1' Jan 23 00:06:18.761810 polkitd[2169]: Acquired the name org.freedesktop.PolicyKit1 on the system bus Jan 23 00:06:18.823518 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5578 INFO Agent will take identity from EC2 Jan 23 00:06:18.842726 systemd-hostnamed[2038]: Hostname set to (transient) Jan 23 00:06:18.843743 systemd-resolved[1824]: System hostname changed to 'ip-172-31-18-130'. Jan 23 00:06:18.923524 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5688 INFO [amazon-ssm-agent] amazon-ssm-agent - v3.3.0.0 Jan 23 00:06:19.024570 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5689 INFO [amazon-ssm-agent] OS: linux, Arch: arm64 Jan 23 00:06:19.124124 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5689 INFO [amazon-ssm-agent] Starting Core Agent Jan 23 00:06:19.225848 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5689 INFO [amazon-ssm-agent] Registrar detected. Attempting registration Jan 23 00:06:19.237621 tar[1992]: linux-arm64/README.md Jan 23 00:06:19.277162 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. 
Jan 23 00:06:19.325582 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5689 INFO [Registrar] Starting registrar module Jan 23 00:06:19.425876 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5725 INFO [EC2Identity] Checking disk for registration info Jan 23 00:06:19.526157 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5726 INFO [EC2Identity] No registration info found for ec2 instance, attempting registration Jan 23 00:06:19.626464 amazon-ssm-agent[2162]: 2026-01-23 00:06:18.5726 INFO [EC2Identity] Generating registration keypair Jan 23 00:06:20.014015 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0106 INFO [EC2Identity] Checking write access before registering Jan 23 00:06:20.062037 amazon-ssm-agent[2162]: 2026/01/23 00:06:20 Found config file at /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:20.062228 amazon-ssm-agent[2162]: Applying config override from /etc/amazon/ssm/amazon-ssm-agent.json. Jan 23 00:06:20.062549 amazon-ssm-agent[2162]: 2026/01/23 00:06:20 processing appconfig overrides Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0132 INFO [EC2Identity] Registering EC2 instance with Systems Manager Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0617 INFO [EC2Identity] EC2 registration was successful. Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0617 INFO [amazon-ssm-agent] Registration attempted. Resuming core agent startup. 
Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0618 INFO [CredentialRefresher] credentialRefresher has started Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.0618 INFO [CredentialRefresher] Starting credentials refresher loop Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.1105 INFO EC2RoleProvider Successfully connected with instance profile role credentials Jan 23 00:06:20.111146 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.1108 INFO [CredentialRefresher] Credentials ready Jan 23 00:06:20.114298 amazon-ssm-agent[2162]: 2026-01-23 00:06:20.1110 INFO [CredentialRefresher] Next credential rotation will be in 29.9999927273 minutes Jan 23 00:06:20.146984 sshd_keygen[2019]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 23 00:06:20.187374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 23 00:06:20.196670 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 23 00:06:20.201006 systemd[1]: Started sshd@0-172.31.18.130:22-4.153.228.146:53930.service - OpenSSH per-connection server daemon (4.153.228.146:53930). Jan 23 00:06:20.234905 systemd[1]: issuegen.service: Deactivated successfully. Jan 23 00:06:20.235392 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 23 00:06:20.241132 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 23 00:06:20.289304 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 23 00:06:20.299105 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 23 00:06:20.309514 systemd[1]: Started serial-getty@ttyS0.service - Serial Getty on ttyS0. Jan 23 00:06:20.320143 systemd[1]: Reached target getty.target - Login Prompts. Jan 23 00:06:20.697223 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:06:20.701417 systemd[1]: Reached target multi-user.target - Multi-User System. 
Jan 23 00:06:20.707602 systemd[1]: Startup finished in 3.733s (kernel) + 9.241s (initrd) + 9.841s (userspace) = 22.817s. Jan 23 00:06:20.715296 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 23 00:06:20.812759 sshd[2238]: Accepted publickey for core from 4.153.228.146 port 53930 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:20.815903 sshd-session[2238]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:20.841673 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 23 00:06:20.844284 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 23 00:06:20.855822 systemd-logind[1983]: New session 1 of user core. Jan 23 00:06:20.880012 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 23 00:06:20.886332 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 23 00:06:20.904930 (systemd)[2260]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 23 00:06:20.912734 systemd-logind[1983]: New session c1 of user core. Jan 23 00:06:21.152738 amazon-ssm-agent[2162]: 2026-01-23 00:06:21.1507 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker is not running, starting worker process Jan 23 00:06:21.207963 systemd[2260]: Queued start job for default target default.target. Jan 23 00:06:21.223839 systemd[2260]: Created slice app.slice - User Application Slice. Jan 23 00:06:21.224083 systemd[2260]: Reached target paths.target - Paths. Jan 23 00:06:21.224178 systemd[2260]: Reached target timers.target - Timers. Jan 23 00:06:21.226792 systemd[2260]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 23 00:06:21.253694 amazon-ssm-agent[2162]: 2026-01-23 00:06:21.1726 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] [WorkerProvider] Worker ssm-agent-worker (pid:2273) started Jan 23 00:06:21.263856 systemd[2260]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 23 00:06:21.264384 systemd[2260]: Reached target sockets.target - Sockets. Jan 23 00:06:21.264664 systemd[2260]: Reached target basic.target - Basic System. Jan 23 00:06:21.264875 systemd[2260]: Reached target default.target - Main User Target. Jan 23 00:06:21.265026 systemd[2260]: Startup finished in 336ms. Jan 23 00:06:21.266769 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 23 00:06:21.272804 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 23 00:06:21.354686 amazon-ssm-agent[2162]: 2026-01-23 00:06:21.1727 INFO [amazon-ssm-agent] [LongRunningWorkerContainer] Monitor long running worker health every 60 seconds Jan 23 00:06:21.661904 systemd[1]: Started sshd@1-172.31.18.130:22-4.153.228.146:53942.service - OpenSSH per-connection server daemon (4.153.228.146:53942). Jan 23 00:06:21.911285 kubelet[2253]: E0123 00:06:21.911189 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 23 00:06:21.915474 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 23 00:06:21.915830 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 23 00:06:21.918686 systemd[1]: kubelet.service: Consumed 1.349s CPU time, 249.1M memory peak. 
Jan 23 00:06:22.309237 sshd[2290]: Accepted publickey for core from 4.153.228.146 port 53942 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:22.311789 sshd-session[2290]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:22.322548 systemd-logind[1983]: New session 2 of user core. Jan 23 00:06:22.331790 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 23 00:06:22.694693 sshd[2295]: Connection closed by 4.153.228.146 port 53942 Jan 23 00:06:22.695674 sshd-session[2290]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:22.703205 systemd[1]: sshd@1-172.31.18.130:22-4.153.228.146:53942.service: Deactivated successfully. Jan 23 00:06:22.706832 systemd[1]: session-2.scope: Deactivated successfully. Jan 23 00:06:22.709250 systemd-logind[1983]: Session 2 logged out. Waiting for processes to exit. Jan 23 00:06:22.711677 systemd-logind[1983]: Removed session 2. Jan 23 00:06:22.782818 systemd[1]: Started sshd@2-172.31.18.130:22-4.153.228.146:53950.service - OpenSSH per-connection server daemon (4.153.228.146:53950). Jan 23 00:06:23.299568 sshd[2301]: Accepted publickey for core from 4.153.228.146 port 53950 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:23.301198 sshd-session[2301]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:23.310592 systemd-logind[1983]: New session 3 of user core. Jan 23 00:06:23.317798 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 23 00:06:23.646843 sshd[2304]: Connection closed by 4.153.228.146 port 53950 Jan 23 00:06:23.645634 sshd-session[2301]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:23.652753 systemd[1]: sshd@2-172.31.18.130:22-4.153.228.146:53950.service: Deactivated successfully. Jan 23 00:06:23.656336 systemd[1]: session-3.scope: Deactivated successfully. Jan 23 00:06:23.658464 systemd-logind[1983]: Session 3 logged out. 
Waiting for processes to exit. Jan 23 00:06:23.661588 systemd-logind[1983]: Removed session 3. Jan 23 00:06:23.736610 systemd[1]: Started sshd@3-172.31.18.130:22-4.153.228.146:53964.service - OpenSSH per-connection server daemon (4.153.228.146:53964). Jan 23 00:06:24.258966 sshd[2310]: Accepted publickey for core from 4.153.228.146 port 53964 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:24.261261 sshd-session[2310]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:24.269831 systemd-logind[1983]: New session 4 of user core. Jan 23 00:06:24.279807 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 23 00:06:24.618717 sshd[2313]: Connection closed by 4.153.228.146 port 53964 Jan 23 00:06:24.619822 sshd-session[2310]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:24.627780 systemd[1]: sshd@3-172.31.18.130:22-4.153.228.146:53964.service: Deactivated successfully. Jan 23 00:06:24.632564 systemd[1]: session-4.scope: Deactivated successfully. Jan 23 00:06:24.634628 systemd-logind[1983]: Session 4 logged out. Waiting for processes to exit. Jan 23 00:06:24.638298 systemd-logind[1983]: Removed session 4. Jan 23 00:06:24.715347 systemd[1]: Started sshd@4-172.31.18.130:22-4.153.228.146:33072.service - OpenSSH per-connection server daemon (4.153.228.146:33072). Jan 23 00:06:25.241031 sshd[2319]: Accepted publickey for core from 4.153.228.146 port 33072 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:25.243288 sshd-session[2319]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:25.252603 systemd-logind[1983]: New session 5 of user core. Jan 23 00:06:25.263803 systemd[1]: Started session-5.scope - Session 5 of User core. 
Jan 23 00:06:25.557944 sudo[2323]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 23 00:06:25.558637 sudo[2323]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:06:25.574880 sudo[2323]: pam_unix(sudo:session): session closed for user root Jan 23 00:06:25.654556 sshd[2322]: Connection closed by 4.153.228.146 port 33072 Jan 23 00:06:25.654166 sshd-session[2319]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:25.662126 systemd-logind[1983]: Session 5 logged out. Waiting for processes to exit. Jan 23 00:06:25.662629 systemd[1]: sshd@4-172.31.18.130:22-4.153.228.146:33072.service: Deactivated successfully. Jan 23 00:06:25.666004 systemd[1]: session-5.scope: Deactivated successfully. Jan 23 00:06:25.668823 systemd-logind[1983]: Removed session 5. Jan 23 00:06:25.746707 systemd[1]: Started sshd@5-172.31.18.130:22-4.153.228.146:33080.service - OpenSSH per-connection server daemon (4.153.228.146:33080). Jan 23 00:06:26.258697 sshd[2329]: Accepted publickey for core from 4.153.228.146 port 33080 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:26.260832 sshd-session[2329]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:26.269586 systemd-logind[1983]: New session 6 of user core. Jan 23 00:06:26.275751 systemd[1]: Started session-6.scope - Session 6 of User core. 
Jan 23 00:06:26.534438 sudo[2334]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 23 00:06:26.535980 sudo[2334]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:06:26.544387 sudo[2334]: pam_unix(sudo:session): session closed for user root Jan 23 00:06:26.555063 sudo[2333]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 23 00:06:26.555740 sudo[2333]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:06:26.574642 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 23 00:06:26.649334 augenrules[2356]: No rules Jan 23 00:06:26.651487 systemd[1]: audit-rules.service: Deactivated successfully. Jan 23 00:06:26.651969 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 23 00:06:26.654155 sudo[2333]: pam_unix(sudo:session): session closed for user root Jan 23 00:06:26.731439 sshd[2332]: Connection closed by 4.153.228.146 port 33080 Jan 23 00:06:26.732193 sshd-session[2329]: pam_unix(sshd:session): session closed for user core Jan 23 00:06:26.738972 systemd[1]: sshd@5-172.31.18.130:22-4.153.228.146:33080.service: Deactivated successfully. Jan 23 00:06:26.742375 systemd[1]: session-6.scope: Deactivated successfully. Jan 23 00:06:26.744485 systemd-logind[1983]: Session 6 logged out. Waiting for processes to exit. Jan 23 00:06:26.746846 systemd-logind[1983]: Removed session 6. Jan 23 00:06:26.826653 systemd[1]: Started sshd@6-172.31.18.130:22-4.153.228.146:33082.service - OpenSSH per-connection server daemon (4.153.228.146:33082). 
Jan 23 00:06:27.364138 sshd[2365]: Accepted publickey for core from 4.153.228.146 port 33082 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:06:27.366429 sshd-session[2365]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:06:27.374428 systemd-logind[1983]: New session 7 of user core. Jan 23 00:06:27.386763 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 23 00:06:27.644190 sudo[2369]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 23 00:06:27.644894 sudo[2369]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 23 00:06:28.349846 systemd[1]: Starting docker.service - Docker Application Container Engine... Jan 23 00:06:28.364248 (dockerd)[2387]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 23 00:06:28.987391 dockerd[2387]: time="2026-01-23T00:06:28.987287583Z" level=info msg="Starting up" Jan 23 00:06:28.991363 dockerd[2387]: time="2026-01-23T00:06:28.991143762Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 23 00:06:29.011735 dockerd[2387]: time="2026-01-23T00:06:29.011672056Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 23 00:06:29.037146 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport4006999258-merged.mount: Deactivated successfully. Jan 23 00:06:29.071009 dockerd[2387]: time="2026-01-23T00:06:29.070722657Z" level=info msg="Loading containers: start." Jan 23 00:06:29.092549 kernel: Initializing XFRM netlink socket Jan 23 00:06:29.491293 (udev-worker)[2409]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 00:06:29.573155 systemd-networkd[1822]: docker0: Link UP Jan 23 00:06:29.578732 dockerd[2387]: time="2026-01-23T00:06:29.578660954Z" level=info msg="Loading containers: done." Jan 23 00:06:29.606147 dockerd[2387]: time="2026-01-23T00:06:29.606048353Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 23 00:06:29.606356 dockerd[2387]: time="2026-01-23T00:06:29.606224161Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 23 00:06:29.606502 dockerd[2387]: time="2026-01-23T00:06:29.606450860Z" level=info msg="Initializing buildkit" Jan 23 00:06:29.648645 dockerd[2387]: time="2026-01-23T00:06:29.648565824Z" level=info msg="Completed buildkit initialization" Jan 23 00:06:29.663675 dockerd[2387]: time="2026-01-23T00:06:29.663555434Z" level=info msg="Daemon has completed initialization" Jan 23 00:06:29.663946 dockerd[2387]: time="2026-01-23T00:06:29.663868069Z" level=info msg="API listen on /run/docker.sock" Jan 23 00:06:29.665359 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 23 00:06:30.031219 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3300892408-merged.mount: Deactivated successfully. Jan 23 00:06:31.043243 containerd[2014]: time="2026-01-23T00:06:31.043144622Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\"" Jan 23 00:06:31.596648 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3691873742.mount: Deactivated successfully. Jan 23 00:06:32.166402 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 23 00:06:32.169123 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:06:32.632952 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 23 00:06:32.648226 (kubelet)[2665]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:06:32.742519 kubelet[2665]: E0123 00:06:32.742396 2665 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:06:32.751463 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:06:32.751826 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:06:32.754764 systemd[1]: kubelet.service: Consumed 338ms CPU time, 107.1M memory peak.
Jan 23 00:06:33.185212 containerd[2014]: time="2026-01-23T00:06:33.185154396Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:33.188957 containerd[2014]: time="2026-01-23T00:06:33.188653969Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.34.3: active requests=0, bytes read=24571040"
Jan 23 00:06:33.191889 containerd[2014]: time="2026-01-23T00:06:33.191827390Z" level=info msg="ImageCreate event name:\"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:33.197927 containerd[2014]: time="2026-01-23T00:06:33.197875508Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:33.200098 containerd[2014]: time="2026-01-23T00:06:33.199828947Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.34.3\" with image id \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\", repo tag \"registry.k8s.io/kube-apiserver:v1.34.3\", repo digest \"registry.k8s.io/kube-apiserver@sha256:5af1030676ceca025742ef5e73a504d11b59be0e5551cdb8c9cf0d3c1231b460\", size \"24567639\" in 2.156610345s"
Jan 23 00:06:33.200098 containerd[2014]: time="2026-01-23T00:06:33.199884755Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.34.3\" returns image reference \"sha256:cf65ae6c8f700cc27f57b7305c6e2b71276a7eed943c559a0091e1e667169896\""
Jan 23 00:06:33.201052 containerd[2014]: time="2026-01-23T00:06:33.201013198Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\""
Jan 23 00:06:34.810597 containerd[2014]: time="2026-01-23T00:06:34.810518672Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:34.813113 containerd[2014]: time="2026-01-23T00:06:34.813044788Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.34.3: active requests=0, bytes read=19135477"
Jan 23 00:06:34.814676 containerd[2014]: time="2026-01-23T00:06:34.814617070Z" level=info msg="ImageCreate event name:\"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:34.820444 containerd[2014]: time="2026-01-23T00:06:34.820359545Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:34.822538 containerd[2014]: time="2026-01-23T00:06:34.822293817Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.34.3\" with image id \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\", repo tag \"registry.k8s.io/kube-controller-manager:v1.34.3\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:716a210d31ee5e27053ea0e1a3a3deb4910791a85ba4b1120410b5a4cbcf1954\", size \"20719958\" in 1.621035006s"
Jan 23 00:06:34.822538 containerd[2014]: time="2026-01-23T00:06:34.822352768Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.34.3\" returns image reference \"sha256:7ada8ff13e54bf42ca66f146b54cd7b1757797d93b3b9ba06df034cdddb5ab22\""
Jan 23 00:06:34.823413 containerd[2014]: time="2026-01-23T00:06:34.822904972Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\""
Jan 23 00:06:35.966032 containerd[2014]: time="2026-01-23T00:06:35.965954452Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:35.968733 containerd[2014]: time="2026-01-23T00:06:35.968668970Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.34.3: active requests=0, bytes read=14191716"
Jan 23 00:06:35.969664 containerd[2014]: time="2026-01-23T00:06:35.969609672Z" level=info msg="ImageCreate event name:\"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:35.975423 containerd[2014]: time="2026-01-23T00:06:35.975333916Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:35.977766 containerd[2014]: time="2026-01-23T00:06:35.977188057Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.34.3\" with image id \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\", repo tag \"registry.k8s.io/kube-scheduler:v1.34.3\", repo digest \"registry.k8s.io/kube-scheduler@sha256:f9a9bc7948fd804ef02255fe82ac2e85d2a66534bae2fe1348c14849260a1fe2\", size \"15776215\" in 1.154238791s"
Jan 23 00:06:35.977766 containerd[2014]: time="2026-01-23T00:06:35.977268573Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.34.3\" returns image reference \"sha256:2f2aa21d34d2db37a290752f34faf1d41087c02e18aa9d046a8b4ba1e29421a6\""
Jan 23 00:06:35.977950 containerd[2014]: time="2026-01-23T00:06:35.977854599Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\""
Jan 23 00:06:37.265475 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3076466342.mount: Deactivated successfully.
Jan 23 00:06:37.681025 containerd[2014]: time="2026-01-23T00:06:37.680966850Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.34.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:37.682681 containerd[2014]: time="2026-01-23T00:06:37.682640205Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.34.3: active requests=0, bytes read=22805253"
Jan 23 00:06:37.683137 containerd[2014]: time="2026-01-23T00:06:37.683098628Z" level=info msg="ImageCreate event name:\"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:37.686415 containerd[2014]: time="2026-01-23T00:06:37.686306939Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:37.687483 containerd[2014]: time="2026-01-23T00:06:37.687436210Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.34.3\" with image id \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\", repo tag \"registry.k8s.io/kube-proxy:v1.34.3\", repo digest \"registry.k8s.io/kube-proxy@sha256:7298ab89a103523d02ff4f49bedf9359710af61df92efdc07bac873064f03ed6\", size \"22804272\" in 1.709531859s"
Jan 23 00:06:37.687759 containerd[2014]: time="2026-01-23T00:06:37.687612030Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.34.3\" returns image reference \"sha256:4461daf6b6af87cf200fc22cecc9a2120959aabaf5712ba54ef5b4a6361d1162\""
Jan 23 00:06:37.688581 containerd[2014]: time="2026-01-23T00:06:37.688533373Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\""
Jan 23 00:06:38.192862 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2457556802.mount: Deactivated successfully.
Jan 23 00:06:39.311575 containerd[2014]: time="2026-01-23T00:06:39.311508085Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.313197 containerd[2014]: time="2026-01-23T00:06:39.312854915Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.1: active requests=0, bytes read=20395406"
Jan 23 00:06:39.314661 containerd[2014]: time="2026-01-23T00:06:39.314602429Z" level=info msg="ImageCreate event name:\"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.319849 containerd[2014]: time="2026-01-23T00:06:39.319783453Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.321945 containerd[2014]: time="2026-01-23T00:06:39.321897264Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.1\" with image id \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.1\", repo digest \"registry.k8s.io/coredns/coredns@sha256:e8c262566636e6bc340ece6473b0eed193cad045384401529721ddbe6463d31c\", size \"20392204\" in 1.633205486s"
Jan 23 00:06:39.322117 containerd[2014]: time="2026-01-23T00:06:39.322085605Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.1\" returns image reference \"sha256:138784d87c9c50f8e59412544da4cf4928d61ccbaf93b9f5898a3ba406871bfc\""
Jan 23 00:06:39.323527 containerd[2014]: time="2026-01-23T00:06:39.322991224Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\""
Jan 23 00:06:39.774654 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4044396224.mount: Deactivated successfully.
Jan 23 00:06:39.783658 containerd[2014]: time="2026-01-23T00:06:39.783598139Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10.1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.786367 containerd[2014]: time="2026-01-23T00:06:39.786300795Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10.1: active requests=0, bytes read=268709"
Jan 23 00:06:39.787722 containerd[2014]: time="2026-01-23T00:06:39.787661874Z" level=info msg="ImageCreate event name:\"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.795523 containerd[2014]: time="2026-01-23T00:06:39.795059090Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:39.798034 containerd[2014]: time="2026-01-23T00:06:39.797971425Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10.1\" with image id \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\", repo tag \"registry.k8s.io/pause:3.10.1\", repo digest \"registry.k8s.io/pause@sha256:278fb9dbcca9518083ad1e11276933a2e96f23de604a3a08cc3c80002767d24c\", size \"267939\" in 474.918132ms"
Jan 23 00:06:39.798262 containerd[2014]: time="2026-01-23T00:06:39.798215155Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10.1\" returns image reference \"sha256:d7b100cd9a77ba782c5e428c8dd5a1df4a1e79d4cb6294acd7d01290ab3babbd\""
Jan 23 00:06:39.800290 containerd[2014]: time="2026-01-23T00:06:39.800217733Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\""
Jan 23 00:06:40.336533 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount535842407.mount: Deactivated successfully.
Jan 23 00:06:42.818027 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2.
Jan 23 00:06:42.822656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:43.239804 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:43.258061 (kubelet)[2804]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS
Jan 23 00:06:43.352199 kubelet[2804]: E0123 00:06:43.352109 2804 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory"
Jan 23 00:06:43.359677 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE
Jan 23 00:06:43.359979 systemd[1]: kubelet.service: Failed with result 'exit-code'.
Jan 23 00:06:43.360977 systemd[1]: kubelet.service: Consumed 340ms CPU time, 105.2M memory peak.
Jan 23 00:06:44.262221 containerd[2014]: time="2026-01-23T00:06:44.262136151Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.6.4-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:44.265328 containerd[2014]: time="2026-01-23T00:06:44.265260108Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.6.4-0: active requests=0, bytes read=98062987"
Jan 23 00:06:44.266524 containerd[2014]: time="2026-01-23T00:06:44.266448582Z" level=info msg="ImageCreate event name:\"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:44.277524 containerd[2014]: time="2026-01-23T00:06:44.277200039Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:06:44.281709 containerd[2014]: time="2026-01-23T00:06:44.281645280Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.6.4-0\" with image id \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\", repo tag \"registry.k8s.io/etcd:3.6.4-0\", repo digest \"registry.k8s.io/etcd@sha256:e36c081683425b5b3bc1425bc508b37e7107bb65dfa9367bf5a80125d431fa19\", size \"98207481\" in 4.481363475s"
Jan 23 00:06:44.281890 containerd[2014]: time="2026-01-23T00:06:44.281860716Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.6.4-0\" returns image reference \"sha256:a1894772a478e07c67a56e8bf32335fdbe1dd4ec96976a5987083164bd00bc0e\""
Jan 23 00:06:48.880762 systemd[1]: systemd-hostnamed.service: Deactivated successfully.
Jan 23 00:06:52.700045 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:52.700396 systemd[1]: kubelet.service: Consumed 340ms CPU time, 105.2M memory peak.
Jan 23 00:06:52.704058 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:52.762405 systemd[1]: Reload requested from client PID 2843 ('systemctl') (unit session-7.scope)...
Jan 23 00:06:52.762435 systemd[1]: Reloading...
Jan 23 00:06:53.007534 zram_generator::config[2890]: No configuration found.
Jan 23 00:06:53.472472 systemd[1]: Reloading finished in 709 ms.
Jan 23 00:06:53.575866 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM
Jan 23 00:06:53.576057 systemd[1]: kubelet.service: Failed with result 'signal'.
Jan 23 00:06:53.576711 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:53.576795 systemd[1]: kubelet.service: Consumed 230ms CPU time, 95M memory peak.
Jan 23 00:06:53.581223 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent...
Jan 23 00:06:53.942539 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent.
Jan 23 00:06:53.963062 (kubelet)[2951]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS
Jan 23 00:06:54.039820 kubelet[2951]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 00:06:54.039820 kubelet[2951]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 00:06:54.040320 kubelet[2951]: I0123 00:06:54.039878 2951 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 00:06:55.555525 kubelet[2951]: I0123 00:06:55.554803 2951 server.go:529] "Kubelet version" kubeletVersion="v1.34.1"
Jan 23 00:06:55.555525 kubelet[2951]: I0123 00:06:55.554862 2951 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 23 00:06:55.558106 kubelet[2951]: I0123 00:06:55.558048 2951 watchdog_linux.go:95] "Systemd watchdog is not enabled"
Jan 23 00:06:55.558584 kubelet[2951]: I0123 00:06:55.558556 2951 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 23 00:06:55.559131 kubelet[2951]: I0123 00:06:55.559104 2951 server.go:956] "Client rotation is on, will bootstrap in background"
Jan 23 00:06:55.570237 kubelet[2951]: E0123 00:06:55.570171 2951 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://172.31.18.130:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError"
Jan 23 00:06:55.572420 kubelet[2951]: I0123 00:06:55.572186 2951 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 23 00:06:55.579132 kubelet[2951]: I0123 00:06:55.579088 2951 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 23 00:06:55.584519 kubelet[2951]: I0123 00:06:55.584437 2951 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. Defaulting to /"
Jan 23 00:06:55.584937 kubelet[2951]: I0123 00:06:55.584890 2951 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 23 00:06:55.585192 kubelet[2951]: I0123 00:06:55.584937 2951 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 23 00:06:55.585375 kubelet[2951]: I0123 00:06:55.585193 2951 topology_manager.go:138] "Creating topology manager with none policy"
Jan 23 00:06:55.585375 kubelet[2951]: I0123 00:06:55.585212 2951 container_manager_linux.go:306] "Creating device plugin manager"
Jan 23 00:06:55.585484 kubelet[2951]: I0123 00:06:55.585377 2951 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager"
Jan 23 00:06:55.590019 kubelet[2951]: I0123 00:06:55.589978 2951 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:06:55.592572 kubelet[2951]: I0123 00:06:55.592512 2951 kubelet.go:475] "Attempting to sync node with API server"
Jan 23 00:06:55.592572 kubelet[2951]: I0123 00:06:55.592560 2951 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 23 00:06:55.593544 kubelet[2951]: E0123 00:06:55.593454 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-130&limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node"
Jan 23 00:06:55.594519 kubelet[2951]: I0123 00:06:55.593686 2951 kubelet.go:387] "Adding apiserver pod source"
Jan 23 00:06:55.594519 kubelet[2951]: I0123 00:06:55.593734 2951 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 23 00:06:55.596111 kubelet[2951]: E0123 00:06:55.596016 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service"
Jan 23 00:06:55.596375 kubelet[2951]: I0123 00:06:55.596333 2951 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1"
Jan 23 00:06:55.597470 kubelet[2951]: I0123 00:06:55.597410 2951 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled"
Jan 23 00:06:55.597592 kubelet[2951]: I0123 00:06:55.597474 2951 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled"
Jan 23 00:06:55.597592 kubelet[2951]: W0123 00:06:55.597574 2951 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 23 00:06:55.603153 kubelet[2951]: I0123 00:06:55.603106 2951 server.go:1262] "Started kubelet"
Jan 23 00:06:55.605078 kubelet[2951]: I0123 00:06:55.605000 2951 server.go:180] "Starting to listen" address="0.0.0.0" port=10250
Jan 23 00:06:55.608828 kubelet[2951]: I0123 00:06:55.608679 2951 server.go:310] "Adding debug handlers to kubelet server"
Jan 23 00:06:55.611394 kubelet[2951]: I0123 00:06:55.611097 2951 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 23 00:06:55.621973 kubelet[2951]: I0123 00:06:55.621931 2951 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 23 00:06:55.622562 kubelet[2951]: I0123 00:06:55.622447 2951 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 23 00:06:55.622683 kubelet[2951]: I0123 00:06:55.622585 2951 server_v1.go:49] "podresources" method="list" useActivePods=true
Jan 23 00:06:55.622939 kubelet[2951]: I0123 00:06:55.622903 2951 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 23 00:06:55.625770 kubelet[2951]: I0123 00:06:55.625740 2951 volume_manager.go:313] "Starting Kubelet Volume Manager"
Jan 23 00:06:55.626260 kubelet[2951]: E0123 00:06:55.626230 2951 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-130\" not found"
Jan 23 00:06:55.633286 kubelet[2951]: E0123 00:06:55.631053 2951 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://172.31.18.130:6443/api/v1/namespaces/default/events\": dial tcp 172.31.18.130:6443: connect: connection refused" event="&Event{ObjectMeta:{ip-172-31-18-130.188d33772ea378b9 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ip-172-31-18-130,UID:ip-172-31-18-130,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ip-172-31-18-130,},FirstTimestamp:2026-01-23 00:06:55.603062969 +0000 UTC m=+1.633173271,LastTimestamp:2026-01-23 00:06:55.603062969 +0000 UTC m=+1.633173271,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ip-172-31-18-130,}"
Jan 23 00:06:55.633556 kubelet[2951]: E0123 00:06:55.633388 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="200ms"
Jan 23 00:06:55.636576 kubelet[2951]: I0123 00:06:55.636060 2951 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 23 00:06:55.639559 kubelet[2951]: I0123 00:06:55.638328 2951 desired_state_of_world_populator.go:146] "Desired state populator starts to run"
Jan 23 00:06:55.639717 kubelet[2951]: E0123 00:06:55.639687 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver"
Jan 23 00:06:55.639949 kubelet[2951]: I0123 00:06:55.639908 2951 factory.go:223] Registration of the containerd container factory successfully
Jan 23 00:06:55.639949 kubelet[2951]: I0123 00:06:55.639940 2951 factory.go:223] Registration of the systemd container factory successfully
Jan 23 00:06:55.641648 kubelet[2951]: I0123 00:06:55.641614 2951 reconciler.go:29] "Reconciler: start to sync state"
Jan 23 00:06:55.674316 kubelet[2951]: I0123 00:06:55.674283 2951 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 23 00:06:55.674559 kubelet[2951]: I0123 00:06:55.674483 2951 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 23 00:06:55.674696 kubelet[2951]: I0123 00:06:55.674679 2951 state_mem.go:36] "Initialized new in-memory state store"
Jan 23 00:06:55.678459 kubelet[2951]: I0123 00:06:55.678429 2951 policy_none.go:49] "None policy: Start"
Jan 23 00:06:55.678697 kubelet[2951]: I0123 00:06:55.678677 2951 memory_manager.go:187] "Starting memorymanager" policy="None"
Jan 23 00:06:55.678898 kubelet[2951]: I0123 00:06:55.678862 2951 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint"
Jan 23 00:06:55.682012 kubelet[2951]: I0123 00:06:55.681979 2951 policy_none.go:47] "Start"
Jan 23 00:06:55.683590 kubelet[2951]: I0123 00:06:55.683550 2951 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4"
Jan 23 00:06:55.689232 kubelet[2951]: I0123 00:06:55.689195 2951 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6"
Jan 23 00:06:55.689435 kubelet[2951]: I0123 00:06:55.689415 2951 status_manager.go:244] "Starting to sync pod status with apiserver"
Jan 23 00:06:55.689635 kubelet[2951]: I0123 00:06:55.689616 2951 kubelet.go:2427] "Starting kubelet main sync loop"
Jan 23 00:06:55.689821 kubelet[2951]: E0123 00:06:55.689783 2951 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 23 00:06:55.693905 kubelet[2951]: E0123 00:06:55.692312 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass"
Jan 23 00:06:55.698779 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 23 00:06:55.719181 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 23 00:06:55.726976 kubelet[2951]: E0123 00:06:55.726520 2951 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-130\" not found"
Jan 23 00:06:55.726684 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 23 00:06:55.738474 kubelet[2951]: E0123 00:06:55.738130 2951 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint"
Jan 23 00:06:55.740284 kubelet[2951]: I0123 00:06:55.740232 2951 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 23 00:06:55.740421 kubelet[2951]: I0123 00:06:55.740270 2951 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 23 00:06:55.742521 kubelet[2951]: I0123 00:06:55.740820 2951 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 23 00:06:55.745942 kubelet[2951]: E0123 00:06:55.745826 2951 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 23 00:06:55.745942 kubelet[2951]: E0123 00:06:55.745894 2951 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ip-172-31-18-130\" not found"
Jan 23 00:06:55.815258 systemd[1]: Created slice kubepods-burstable-poda047342ff09243393e640e96e7df8e17.slice - libcontainer container kubepods-burstable-poda047342ff09243393e640e96e7df8e17.slice.
Jan 23 00:06:55.834155 kubelet[2951]: E0123 00:06:55.834059 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="400ms"
Jan 23 00:06:55.837936 kubelet[2951]: E0123 00:06:55.837856 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130"
Jan 23 00:06:55.843076 kubelet[2951]: I0123 00:06:55.843038 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130"
Jan 23 00:06:55.843354 kubelet[2951]: I0123 00:06:55.843328 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130"
Jan 23 00:06:55.843721 kubelet[2951]: I0123 00:06:55.843631 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130"
Jan 23 00:06:55.843992 kubelet[2951]: I0123 00:06:55.843944 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/746ce7ddd3dced72878f4c4bf8cb4e75-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-130\" (UID: \"746ce7ddd3dced72878f4c4bf8cb4e75\") " pod="kube-system/kube-scheduler-ip-172-31-18-130"
Jan 23 00:06:55.844393 kubelet[2951]: I0123 00:06:55.844363 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-ca-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " pod="kube-system/kube-apiserver-ip-172-31-18-130"
Jan 23 00:06:55.844915 kubelet[2951]: I0123 00:06:55.844881 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " pod="kube-system/kube-apiserver-ip-172-31-18-130"
Jan 23 00:06:55.845262 kubelet[2951]: I0123 00:06:55.845189 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " pod="kube-system/kube-apiserver-ip-172-31-18-130"
Jan 23 00:06:55.845459 kubelet[2951]: I0123 00:06:55.845371 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130"
Jan 23 00:06:55.845459 kubelet[2951]: I0123 00:06:55.845411 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130"
Jan 23 00:06:55.845869 kubelet[2951]: I0123 00:06:55.845425 2951 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130"
Jan 23 00:06:55.846137 kubelet[2951]: E0123 00:06:55.846085 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130"
Jan 23 00:06:55.848418 systemd[1]: Created slice kubepods-burstable-pod877bd30f95818579b75786c752879174.slice - libcontainer container kubepods-burstable-pod877bd30f95818579b75786c752879174.slice.
Jan 23 00:06:55.859979 kubelet[2951]: E0123 00:06:55.859616 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130"
Jan 23 00:06:55.865897 systemd[1]: Created slice kubepods-burstable-pod746ce7ddd3dced72878f4c4bf8cb4e75.slice - libcontainer container kubepods-burstable-pod746ce7ddd3dced72878f4c4bf8cb4e75.slice.
Jan 23 00:06:55.870012 kubelet[2951]: E0123 00:06:55.869683 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:56.048451 kubelet[2951]: I0123 00:06:56.048418 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130" Jan 23 00:06:56.049106 kubelet[2951]: E0123 00:06:56.049055 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Jan 23 00:06:56.143559 containerd[2014]: time="2026-01-23T00:06:56.143457666Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-130,Uid:a047342ff09243393e640e96e7df8e17,Namespace:kube-system,Attempt:0,}" Jan 23 00:06:56.163704 containerd[2014]: time="2026-01-23T00:06:56.163440941Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-130,Uid:877bd30f95818579b75786c752879174,Namespace:kube-system,Attempt:0,}" Jan 23 00:06:56.174120 containerd[2014]: time="2026-01-23T00:06:56.174028885Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-130,Uid:746ce7ddd3dced72878f4c4bf8cb4e75,Namespace:kube-system,Attempt:0,}" Jan 23 00:06:56.235058 kubelet[2951]: E0123 00:06:56.234987 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="800ms" Jan 23 00:06:56.451474 kubelet[2951]: I0123 00:06:56.451280 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130" Jan 23 00:06:56.452252 kubelet[2951]: E0123 00:06:56.452198 2951 kubelet_node_status.go:107] "Unable to register node with API server" 
err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Jan 23 00:06:56.486887 kubelet[2951]: E0123 00:06:56.486830 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.Service: Get \"https://172.31.18.130:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 00:06:56.628017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3845997179.mount: Deactivated successfully. Jan 23 00:06:56.638528 containerd[2014]: time="2026-01-23T00:06:56.638435223Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:06:56.642087 containerd[2014]: time="2026-01-23T00:06:56.642014580Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=268703" Jan 23 00:06:56.645750 containerd[2014]: time="2026-01-23T00:06:56.645674345Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:06:56.647715 containerd[2014]: time="2026-01-23T00:06:56.647647298Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:06:56.647973 containerd[2014]: time="2026-01-23T00:06:56.647919310Z" level=info msg="ImageUpdate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:06:56.650058 containerd[2014]: time="2026-01-23T00:06:56.649980731Z" level=info msg="ImageUpdate event 
name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:06:56.651234 containerd[2014]: time="2026-01-23T00:06:56.651176245Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 23 00:06:56.657089 containerd[2014]: time="2026-01-23T00:06:56.656642619Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 23 00:06:56.659118 containerd[2014]: time="2026-01-23T00:06:56.659070516Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 483.597674ms" Jan 23 00:06:56.660897 containerd[2014]: time="2026-01-23T00:06:56.660836057Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 514.656196ms" Jan 23 00:06:56.663529 containerd[2014]: time="2026-01-23T00:06:56.663346245Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 498.269286ms" Jan 23 00:06:56.710236 kubelet[2951]: E0123 00:06:56.709148 2951 reflector.go:205] 
"Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://172.31.18.130:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 00:06:56.726453 containerd[2014]: time="2026-01-23T00:06:56.726355165Z" level=info msg="connecting to shim 0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05" address="unix:///run/containerd/s/ce01ae9628162ffe250699adff47b8a9473f4dc28981542f95b114c934a9fbe8" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:06:56.735609 containerd[2014]: time="2026-01-23T00:06:56.735469993Z" level=info msg="connecting to shim 75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90" address="unix:///run/containerd/s/95ba6a2bdbd32b06949110c484e0d53cd06eaf27bc55e682a059cca5ee4845e9" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:06:56.757949 containerd[2014]: time="2026-01-23T00:06:56.757815811Z" level=info msg="connecting to shim 2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1" address="unix:///run/containerd/s/5addb1488a7cc4b7811e252b6c5b9d437f768bdf939ddc624b000f7ba5177f99" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:06:56.790899 systemd[1]: Started cri-containerd-0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05.scope - libcontainer container 0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05. Jan 23 00:06:56.814895 systemd[1]: Started cri-containerd-75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90.scope - libcontainer container 75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90. Jan 23 00:06:56.849843 systemd[1]: Started cri-containerd-2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1.scope - libcontainer container 2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1. 
Jan 23 00:06:56.887562 kubelet[2951]: E0123 00:06:56.887359 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.Node: Get \"https://172.31.18.130:6443/api/v1/nodes?fieldSelector=metadata.name%3Dip-172-31-18-130&limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 00:06:56.950121 containerd[2014]: time="2026-01-23T00:06:56.949713874Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ip-172-31-18-130,Uid:a047342ff09243393e640e96e7df8e17,Namespace:kube-system,Attempt:0,} returns sandbox id \"75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90\"" Jan 23 00:06:56.971014 containerd[2014]: time="2026-01-23T00:06:56.970862187Z" level=info msg="CreateContainer within sandbox \"75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 23 00:06:56.988038 containerd[2014]: time="2026-01-23T00:06:56.987963952Z" level=info msg="Container f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:56.997465 containerd[2014]: time="2026-01-23T00:06:56.997178630Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ip-172-31-18-130,Uid:877bd30f95818579b75786c752879174,Namespace:kube-system,Attempt:0,} returns sandbox id \"2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1\"" Jan 23 00:06:57.003290 containerd[2014]: time="2026-01-23T00:06:57.003133148Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ip-172-31-18-130,Uid:746ce7ddd3dced72878f4c4bf8cb4e75,Namespace:kube-system,Attempt:0,} returns sandbox id \"0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05\"" Jan 23 00:06:57.008170 containerd[2014]: time="2026-01-23T00:06:57.008118778Z" level=info msg="CreateContainer within 
sandbox \"2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 23 00:06:57.012525 containerd[2014]: time="2026-01-23T00:06:57.012260390Z" level=info msg="CreateContainer within sandbox \"0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 23 00:06:57.015878 containerd[2014]: time="2026-01-23T00:06:57.015825091Z" level=info msg="CreateContainer within sandbox \"75419eb930dd10afe915db9f93f7181a6f606416a641289fb89ed4bcf44f7c90\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082\"" Jan 23 00:06:57.026531 containerd[2014]: time="2026-01-23T00:06:57.025845766Z" level=info msg="Container 59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:06:57.036325 kubelet[2951]: E0123 00:06:57.036263 2951 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": dial tcp 172.31.18.130:6443: connect: connection refused" interval="1.6s" Jan 23 00:06:57.039372 containerd[2014]: time="2026-01-23T00:06:57.039316299Z" level=info msg="StartContainer for \"f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082\"" Jan 23 00:06:57.042063 containerd[2014]: time="2026-01-23T00:06:57.041949978Z" level=info msg="connecting to shim f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082" address="unix:///run/containerd/s/95ba6a2bdbd32b06949110c484e0d53cd06eaf27bc55e682a059cca5ee4845e9" protocol=ttrpc version=3 Jan 23 00:06:57.049215 containerd[2014]: time="2026-01-23T00:06:57.047909412Z" level=info msg="Container c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b: CDI devices from CRI 
Config.CDIDevices: []" Jan 23 00:06:57.056709 containerd[2014]: time="2026-01-23T00:06:57.056653446Z" level=info msg="CreateContainer within sandbox \"0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6\"" Jan 23 00:06:57.057749 containerd[2014]: time="2026-01-23T00:06:57.057705224Z" level=info msg="StartContainer for \"59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6\"" Jan 23 00:06:57.061043 containerd[2014]: time="2026-01-23T00:06:57.060987034Z" level=info msg="connecting to shim 59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6" address="unix:///run/containerd/s/ce01ae9628162ffe250699adff47b8a9473f4dc28981542f95b114c934a9fbe8" protocol=ttrpc version=3 Jan 23 00:06:57.073179 containerd[2014]: time="2026-01-23T00:06:57.073097832Z" level=info msg="CreateContainer within sandbox \"2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b\"" Jan 23 00:06:57.074813 containerd[2014]: time="2026-01-23T00:06:57.074454929Z" level=info msg="StartContainer for \"c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b\"" Jan 23 00:06:57.080185 containerd[2014]: time="2026-01-23T00:06:57.079917896Z" level=info msg="connecting to shim c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b" address="unix:///run/containerd/s/5addb1488a7cc4b7811e252b6c5b9d437f768bdf939ddc624b000f7ba5177f99" protocol=ttrpc version=3 Jan 23 00:06:57.085868 systemd[1]: Started cri-containerd-f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082.scope - libcontainer container f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082. 
Jan 23 00:06:57.091348 kubelet[2951]: E0123 00:06:57.091182 2951 reflector.go:205] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://172.31.18.130:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 172.31.18.130:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 00:06:57.124790 systemd[1]: Started cri-containerd-59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6.scope - libcontainer container 59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6. Jan 23 00:06:57.140808 systemd[1]: Started cri-containerd-c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b.scope - libcontainer container c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b. Jan 23 00:06:57.245454 containerd[2014]: time="2026-01-23T00:06:57.245187097Z" level=info msg="StartContainer for \"f314944b10f8d361701f97efbb049e87eec26a7b3fb71764a18b47d7086c1082\" returns successfully" Jan 23 00:06:57.261618 kubelet[2951]: I0123 00:06:57.261539 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130" Jan 23 00:06:57.263672 kubelet[2951]: E0123 00:06:57.263420 2951 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://172.31.18.130:6443/api/v1/nodes\": dial tcp 172.31.18.130:6443: connect: connection refused" node="ip-172-31-18-130" Jan 23 00:06:57.299924 containerd[2014]: time="2026-01-23T00:06:57.299754448Z" level=info msg="StartContainer for \"59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6\" returns successfully" Jan 23 00:06:57.321069 containerd[2014]: time="2026-01-23T00:06:57.320920871Z" level=info msg="StartContainer for \"c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b\" returns successfully" Jan 23 00:06:57.714853 kubelet[2951]: E0123 00:06:57.714800 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get 
node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:57.730323 kubelet[2951]: E0123 00:06:57.730264 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:57.737590 kubelet[2951]: E0123 00:06:57.736981 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:58.734436 kubelet[2951]: E0123 00:06:58.734234 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:58.735050 kubelet[2951]: E0123 00:06:58.734982 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:06:58.867600 kubelet[2951]: I0123 00:06:58.867552 2951 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130" Jan 23 00:06:59.737267 kubelet[2951]: E0123 00:06:59.737216 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:07:00.333409 kubelet[2951]: E0123 00:07:00.333344 2951 kubelet.go:3215] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:07:01.267110 kubelet[2951]: E0123 00:07:01.267054 2951 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ip-172-31-18-130\" not found" node="ip-172-31-18-130" Jan 23 00:07:01.320848 kubelet[2951]: I0123 00:07:01.320781 2951 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-130" Jan 23 
00:07:01.320848 kubelet[2951]: E0123 00:07:01.320846 2951 kubelet_node_status.go:486] "Error updating node status, will retry" err="error getting node \"ip-172-31-18-130\": node \"ip-172-31-18-130\" not found" Jan 23 00:07:01.327325 kubelet[2951]: I0123 00:07:01.327260 2951 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-130" Jan 23 00:07:01.436701 kubelet[2951]: E0123 00:07:01.436612 2951 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-scheduler-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ip-172-31-18-130" Jan 23 00:07:01.436701 kubelet[2951]: I0123 00:07:01.436664 2951 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:01.451916 kubelet[2951]: E0123 00:07:01.451854 2951 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:01.451916 kubelet[2951]: I0123 00:07:01.451903 2951 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:01.468431 kubelet[2951]: E0123 00:07:01.468361 2951 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-130\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:01.599245 kubelet[2951]: I0123 00:07:01.599172 2951 apiserver.go:52] "Watching apiserver" Jan 23 00:07:01.638993 kubelet[2951]: I0123 00:07:01.638932 2951 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:07:01.895807 update_engine[1988]: I20260123 00:07:01.894537 1988 update_attempter.cc:509] Updating boot flags... 
Jan 23 00:07:06.138678 systemd[1]: Reload requested from client PID 3512 ('systemctl') (unit session-7.scope)... Jan 23 00:07:06.138708 systemd[1]: Reloading... Jan 23 00:07:06.353529 zram_generator::config[3562]: No configuration found. Jan 23 00:07:06.414128 kubelet[2951]: I0123 00:07:06.413671 2951 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:06.850362 systemd[1]: Reloading finished in 711 ms. Jan 23 00:07:06.916670 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:06.934207 systemd[1]: kubelet.service: Deactivated successfully. Jan 23 00:07:06.935025 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:06.935287 systemd[1]: kubelet.service: Consumed 2.601s CPU time, 124M memory peak. Jan 23 00:07:06.940182 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 23 00:07:07.340813 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 23 00:07:07.359388 (kubelet)[3616]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 23 00:07:07.461607 kubelet[3616]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 23 00:07:07.461607 kubelet[3616]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 23 00:07:07.462103 kubelet[3616]: I0123 00:07:07.461611 3616 server.go:213] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 23 00:07:07.481527 kubelet[3616]: I0123 00:07:07.481433 3616 server.go:529] "Kubelet version" kubeletVersion="v1.34.1" Jan 23 00:07:07.483534 kubelet[3616]: I0123 00:07:07.482590 3616 server.go:531] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 00:07:07.483534 kubelet[3616]: I0123 00:07:07.482697 3616 watchdog_linux.go:95] "Systemd watchdog is not enabled" Jan 23 00:07:07.483534 kubelet[3616]: I0123 00:07:07.482714 3616 watchdog_linux.go:137] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 23 00:07:07.483716 kubelet[3616]: I0123 00:07:07.483660 3616 server.go:956] "Client rotation is on, will bootstrap in background" Jan 23 00:07:07.489600 kubelet[3616]: I0123 00:07:07.489475 3616 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 00:07:07.496593 kubelet[3616]: I0123 00:07:07.495970 3616 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 23 00:07:07.507521 kubelet[3616]: I0123 00:07:07.507463 3616 server.go:1423] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 00:07:07.513132 kubelet[3616]: I0123 00:07:07.513072 3616 server.go:781] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
Defaulting to /" Jan 23 00:07:07.513710 kubelet[3616]: I0123 00:07:07.513669 3616 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 00:07:07.514081 kubelet[3616]: I0123 00:07:07.513825 3616 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ip-172-31-18-130","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 00:07:07.514484 kubelet[3616]: I0123 00:07:07.514240 3616 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 
00:07:07.514484 kubelet[3616]: I0123 00:07:07.514264 3616 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 00:07:07.514484 kubelet[3616]: I0123 00:07:07.514310 3616 container_manager_linux.go:315] "Creating Dynamic Resource Allocation (DRA) manager" Jan 23 00:07:07.517248 kubelet[3616]: I0123 00:07:07.517215 3616 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:07.517724 kubelet[3616]: I0123 00:07:07.517704 3616 kubelet.go:475] "Attempting to sync node with API server" Jan 23 00:07:07.518752 kubelet[3616]: I0123 00:07:07.518709 3616 kubelet.go:376] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 00:07:07.518896 kubelet[3616]: I0123 00:07:07.518789 3616 kubelet.go:387] "Adding apiserver pod source" Jan 23 00:07:07.518896 kubelet[3616]: I0123 00:07:07.518812 3616 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 00:07:07.523062 kubelet[3616]: I0123 00:07:07.522425 3616 kuberuntime_manager.go:291] "Container runtime initialized" containerRuntime="containerd" version="v2.0.7" apiVersion="v1" Jan 23 00:07:07.523602 kubelet[3616]: I0123 00:07:07.523410 3616 kubelet.go:940] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 00:07:07.523602 kubelet[3616]: I0123 00:07:07.523473 3616 kubelet.go:964] "Not starting PodCertificateRequest manager because we are in static kubelet mode or the PodCertificateProjection feature gate is disabled" Jan 23 00:07:07.530468 kubelet[3616]: I0123 00:07:07.529767 3616 server.go:1262] "Started kubelet" Jan 23 00:07:07.533623 kubelet[3616]: I0123 00:07:07.533430 3616 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 00:07:07.547781 kubelet[3616]: I0123 00:07:07.547627 3616 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 00:07:07.549298 kubelet[3616]: I0123 00:07:07.549250 3616 server.go:310] "Adding debug handlers to 
kubelet server" Jan 23 00:07:07.577650 kubelet[3616]: I0123 00:07:07.577556 3616 ratelimit.go:56] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 00:07:07.577796 kubelet[3616]: I0123 00:07:07.577672 3616 server_v1.go:49] "podresources" method="list" useActivePods=true Jan 23 00:07:07.579408 kubelet[3616]: I0123 00:07:07.577987 3616 server.go:249] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 00:07:07.579900 kubelet[3616]: I0123 00:07:07.579854 3616 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 23 00:07:07.587669 kubelet[3616]: I0123 00:07:07.587578 3616 volume_manager.go:313] "Starting Kubelet Volume Manager" Jan 23 00:07:07.589846 kubelet[3616]: E0123 00:07:07.589786 3616 kubelet_node_status.go:404] "Error getting the current node from lister" err="node \"ip-172-31-18-130\" not found" Jan 23 00:07:07.602579 kubelet[3616]: I0123 00:07:07.602424 3616 desired_state_of_world_populator.go:146] "Desired state populator starts to run" Jan 23 00:07:07.603905 kubelet[3616]: I0123 00:07:07.603847 3616 reconciler.go:29] "Reconciler: start to sync state" Jan 23 00:07:07.610765 kubelet[3616]: I0123 00:07:07.610071 3616 factory.go:223] Registration of the systemd container factory successfully Jan 23 00:07:07.610765 kubelet[3616]: I0123 00:07:07.610257 3616 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 23 00:07:07.616857 kubelet[3616]: E0123 00:07:07.616750 3616 kubelet.go:1615] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 23 00:07:07.618542 kubelet[3616]: I0123 00:07:07.618454 3616 factory.go:223] Registration of the containerd container factory successfully Jan 23 00:07:07.626571 kubelet[3616]: I0123 00:07:07.626485 3616 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv4" Jan 23 00:07:07.629455 kubelet[3616]: I0123 00:07:07.629419 3616 kubelet_network_linux.go:54] "Initialized iptables rules." protocol="IPv6" Jan 23 00:07:07.630143 kubelet[3616]: I0123 00:07:07.629612 3616 status_manager.go:244] "Starting to sync pod status with apiserver" Jan 23 00:07:07.630143 kubelet[3616]: I0123 00:07:07.629649 3616 kubelet.go:2427] "Starting kubelet main sync loop" Jan 23 00:07:07.630143 kubelet[3616]: E0123 00:07:07.629714 3616 kubelet.go:2451] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 00:07:07.729821 kubelet[3616]: E0123 00:07:07.729769 3616 kubelet.go:2451] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 23 00:07:07.758197 kubelet[3616]: I0123 00:07:07.758076 3616 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 23 00:07:07.758953 kubelet[3616]: I0123 00:07:07.758908 3616 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 23 00:07:07.759078 kubelet[3616]: I0123 00:07:07.758972 3616 state_mem.go:36] "Initialized new in-memory state store" Jan 23 00:07:07.759335 kubelet[3616]: I0123 00:07:07.759200 3616 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 23 00:07:07.759335 kubelet[3616]: I0123 00:07:07.759233 3616 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 23 00:07:07.759335 kubelet[3616]: I0123 00:07:07.759267 3616 policy_none.go:49] "None policy: Start" Jan 23 00:07:07.759518 kubelet[3616]: I0123 00:07:07.759285 3616 memory_manager.go:187] "Starting memorymanager" 
policy="None" Jan 23 00:07:07.759518 kubelet[3616]: I0123 00:07:07.759426 3616 state_mem.go:36] "Initializing new in-memory state store" logger="Memory Manager state checkpoint" Jan 23 00:07:07.760070 kubelet[3616]: I0123 00:07:07.759659 3616 state_mem.go:77] "Updated machine memory state" logger="Memory Manager state checkpoint" Jan 23 00:07:07.760070 kubelet[3616]: I0123 00:07:07.759689 3616 policy_none.go:47] "Start" Jan 23 00:07:07.774539 kubelet[3616]: E0123 00:07:07.774281 3616 manager.go:513] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 00:07:07.775004 kubelet[3616]: I0123 00:07:07.774787 3616 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 00:07:07.775004 kubelet[3616]: I0123 00:07:07.774820 3616 container_log_manager.go:146] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 00:07:07.777070 kubelet[3616]: I0123 00:07:07.776982 3616 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 00:07:07.781182 kubelet[3616]: E0123 00:07:07.780722 3616 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 23 00:07:07.901724 kubelet[3616]: I0123 00:07:07.901585 3616 kubelet_node_status.go:75] "Attempting to register node" node="ip-172-31-18-130" Jan 23 00:07:07.917532 kubelet[3616]: I0123 00:07:07.917234 3616 kubelet_node_status.go:124] "Node was previously registered" node="ip-172-31-18-130" Jan 23 00:07:07.917532 kubelet[3616]: I0123 00:07:07.917348 3616 kubelet_node_status.go:78] "Successfully registered node" node="ip-172-31-18-130" Jan 23 00:07:07.934121 kubelet[3616]: I0123 00:07:07.933427 3616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:07.937579 kubelet[3616]: I0123 00:07:07.937353 3616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:07.939920 kubelet[3616]: I0123 00:07:07.939286 3616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ip-172-31-18-130" Jan 23 00:07:07.953614 kubelet[3616]: E0123 00:07:07.953485 3616 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-apiserver-ip-172-31-18-130\" already exists" pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:08.006037 kubelet[3616]: I0123 00:07:08.005420 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/746ce7ddd3dced72878f4c4bf8cb4e75-kubeconfig\") pod \"kube-scheduler-ip-172-31-18-130\" (UID: \"746ce7ddd3dced72878f4c4bf8cb4e75\") " pod="kube-system/kube-scheduler-ip-172-31-18-130" Jan 23 00:07:08.006037 kubelet[3616]: I0123 00:07:08.005485 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-ca-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " 
pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:08.006037 kubelet[3616]: I0123 00:07:08.005545 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-k8s-certs\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:08.006037 kubelet[3616]: I0123 00:07:08.005595 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-ca-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.006037 kubelet[3616]: I0123 00:07:08.005631 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-k8s-certs\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.006385 kubelet[3616]: I0123 00:07:08.005683 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-usr-share-ca-certificates\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.006385 kubelet[3616]: I0123 00:07:08.005723 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: 
\"kubernetes.io/host-path/a047342ff09243393e640e96e7df8e17-usr-share-ca-certificates\") pod \"kube-apiserver-ip-172-31-18-130\" (UID: \"a047342ff09243393e640e96e7df8e17\") " pod="kube-system/kube-apiserver-ip-172-31-18-130" Jan 23 00:07:08.006385 kubelet[3616]: I0123 00:07:08.005758 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-flexvolume-dir\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.006385 kubelet[3616]: I0123 00:07:08.005794 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/877bd30f95818579b75786c752879174-kubeconfig\") pod \"kube-controller-manager-ip-172-31-18-130\" (UID: \"877bd30f95818579b75786c752879174\") " pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.520152 kubelet[3616]: I0123 00:07:08.520084 3616 apiserver.go:52] "Watching apiserver" Jan 23 00:07:08.603082 kubelet[3616]: I0123 00:07:08.603011 3616 desired_state_of_world_populator.go:154] "Finished populating initial desired state of world" Jan 23 00:07:08.711118 kubelet[3616]: I0123 00:07:08.711052 3616 kubelet.go:3219] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.724276 kubelet[3616]: E0123 00:07:08.724142 3616 kubelet.go:3221] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ip-172-31-18-130\" already exists" pod="kube-system/kube-controller-manager-ip-172-31-18-130" Jan 23 00:07:08.781291 kubelet[3616]: I0123 00:07:08.780013 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ip-172-31-18-130" podStartSLOduration=1.779991168 podStartE2EDuration="1.779991168s" 
podCreationTimestamp="2026-01-23 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:08.764287196 +0000 UTC m=+1.390524491" watchObservedRunningTime="2026-01-23 00:07:08.779991168 +0000 UTC m=+1.406228463" Jan 23 00:07:08.810765 kubelet[3616]: I0123 00:07:08.810672 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ip-172-31-18-130" podStartSLOduration=1.81064746 podStartE2EDuration="1.81064746s" podCreationTimestamp="2026-01-23 00:07:07 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:08.781917033 +0000 UTC m=+1.408154340" watchObservedRunningTime="2026-01-23 00:07:08.81064746 +0000 UTC m=+1.436884755" Jan 23 00:07:08.837527 kubelet[3616]: I0123 00:07:08.837429 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ip-172-31-18-130" podStartSLOduration=2.837408209 podStartE2EDuration="2.837408209s" podCreationTimestamp="2026-01-23 00:07:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:08.813066206 +0000 UTC m=+1.439303537" watchObservedRunningTime="2026-01-23 00:07:08.837408209 +0000 UTC m=+1.463645504" Jan 23 00:07:09.436758 kubelet[3616]: I0123 00:07:09.436702 3616 kuberuntime_manager.go:1828] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 23 00:07:09.437280 containerd[2014]: time="2026-01-23T00:07:09.437213171Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." 
Jan 23 00:07:09.438269 kubelet[3616]: I0123 00:07:09.437882 3616 kubelet_network.go:47] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 23 00:07:10.441563 systemd[1]: Created slice kubepods-besteffort-pod9796dfd1_14a0_4299_9ff6_c7db8c93ac6b.slice - libcontainer container kubepods-besteffort-pod9796dfd1_14a0_4299_9ff6_c7db8c93ac6b.slice. Jan 23 00:07:10.520943 kubelet[3616]: I0123 00:07:10.520882 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/9796dfd1-14a0-4299-9ff6-c7db8c93ac6b-lib-modules\") pod \"kube-proxy-bs2js\" (UID: \"9796dfd1-14a0-4299-9ff6-c7db8c93ac6b\") " pod="kube-system/kube-proxy-bs2js" Jan 23 00:07:10.520943 kubelet[3616]: I0123 00:07:10.520953 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/9796dfd1-14a0-4299-9ff6-c7db8c93ac6b-kube-proxy\") pod \"kube-proxy-bs2js\" (UID: \"9796dfd1-14a0-4299-9ff6-c7db8c93ac6b\") " pod="kube-system/kube-proxy-bs2js" Jan 23 00:07:10.522646 kubelet[3616]: I0123 00:07:10.520996 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/9796dfd1-14a0-4299-9ff6-c7db8c93ac6b-xtables-lock\") pod \"kube-proxy-bs2js\" (UID: \"9796dfd1-14a0-4299-9ff6-c7db8c93ac6b\") " pod="kube-system/kube-proxy-bs2js" Jan 23 00:07:10.522646 kubelet[3616]: I0123 00:07:10.521032 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvwm4\" (UniqueName: \"kubernetes.io/projected/9796dfd1-14a0-4299-9ff6-c7db8c93ac6b-kube-api-access-dvwm4\") pod \"kube-proxy-bs2js\" (UID: \"9796dfd1-14a0-4299-9ff6-c7db8c93ac6b\") " pod="kube-system/kube-proxy-bs2js" Jan 23 00:07:10.592350 systemd[1]: Created slice 
kubepods-besteffort-podc3a52394_8874_499e_80dd_a505c33670e9.slice - libcontainer container kubepods-besteffort-podc3a52394_8874_499e_80dd_a505c33670e9.slice. Jan 23 00:07:10.621880 kubelet[3616]: I0123 00:07:10.621813 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrrh6\" (UniqueName: \"kubernetes.io/projected/c3a52394-8874-499e-80dd-a505c33670e9-kube-api-access-nrrh6\") pod \"tigera-operator-65cdcdfd6d-xzvjm\" (UID: \"c3a52394-8874-499e-80dd-a505c33670e9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xzvjm" Jan 23 00:07:10.622376 kubelet[3616]: I0123 00:07:10.622314 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/c3a52394-8874-499e-80dd-a505c33670e9-var-lib-calico\") pod \"tigera-operator-65cdcdfd6d-xzvjm\" (UID: \"c3a52394-8874-499e-80dd-a505c33670e9\") " pod="tigera-operator/tigera-operator-65cdcdfd6d-xzvjm" Jan 23 00:07:10.756636 containerd[2014]: time="2026-01-23T00:07:10.756127136Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bs2js,Uid:9796dfd1-14a0-4299-9ff6-c7db8c93ac6b,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:10.785246 containerd[2014]: time="2026-01-23T00:07:10.784813545Z" level=info msg="connecting to shim a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd" address="unix:///run/containerd/s/c3f42fdacddc1978315d35db2b36734fc35767f9d062e1fabea203a3e88965bb" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:10.843800 systemd[1]: Started cri-containerd-a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd.scope - libcontainer container a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd. 
Jan 23 00:07:10.892079 containerd[2014]: time="2026-01-23T00:07:10.892008373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-bs2js,Uid:9796dfd1-14a0-4299-9ff6-c7db8c93ac6b,Namespace:kube-system,Attempt:0,} returns sandbox id \"a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd\"" Jan 23 00:07:10.903640 containerd[2014]: time="2026-01-23T00:07:10.903589540Z" level=info msg="CreateContainer within sandbox \"a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 23 00:07:10.908528 containerd[2014]: time="2026-01-23T00:07:10.907340040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xzvjm,Uid:c3a52394-8874-499e-80dd-a505c33670e9,Namespace:tigera-operator,Attempt:0,}" Jan 23 00:07:10.931197 containerd[2014]: time="2026-01-23T00:07:10.931129263Z" level=info msg="Container 53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:10.948045 containerd[2014]: time="2026-01-23T00:07:10.947951460Z" level=info msg="CreateContainer within sandbox \"a952f617b591a708972ea41697ef2d79587502061eb783e145b7f2fae731a4fd\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696\"" Jan 23 00:07:10.950738 containerd[2014]: time="2026-01-23T00:07:10.950644509Z" level=info msg="StartContainer for \"53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696\"" Jan 23 00:07:10.953848 containerd[2014]: time="2026-01-23T00:07:10.953791423Z" level=info msg="connecting to shim 80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba" address="unix:///run/containerd/s/1a2f7e10d93a63f78e8384340d022cb9d4a5aa4b7fd76c7e4914d115a5e85cf1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:10.956958 containerd[2014]: time="2026-01-23T00:07:10.956888801Z" level=info msg="connecting to shim 
53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696" address="unix:///run/containerd/s/c3f42fdacddc1978315d35db2b36734fc35767f9d062e1fabea203a3e88965bb" protocol=ttrpc version=3 Jan 23 00:07:10.992218 systemd[1]: Started cri-containerd-53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696.scope - libcontainer container 53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696. Jan 23 00:07:11.013954 systemd[1]: Started cri-containerd-80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba.scope - libcontainer container 80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba. Jan 23 00:07:11.103476 containerd[2014]: time="2026-01-23T00:07:11.103018228Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-65cdcdfd6d-xzvjm,Uid:c3a52394-8874-499e-80dd-a505c33670e9,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba\"" Jan 23 00:07:11.112058 containerd[2014]: time="2026-01-23T00:07:11.111993626Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 23 00:07:11.140259 containerd[2014]: time="2026-01-23T00:07:11.140062597Z" level=info msg="StartContainer for \"53cc52e8291522e7ae360f502d7f48221959b070946287c7ae0cd79892912696\" returns successfully" Jan 23 00:07:11.772133 kubelet[3616]: I0123 00:07:11.771996 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-bs2js" podStartSLOduration=1.771975136 podStartE2EDuration="1.771975136s" podCreationTimestamp="2026-01-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:07:11.749851112 +0000 UTC m=+4.376088503" watchObservedRunningTime="2026-01-23 00:07:11.771975136 +0000 UTC m=+4.398212443" Jan 23 00:07:12.065964 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4277818588.mount: Deactivated successfully. 
Jan 23 00:07:12.891038 containerd[2014]: time="2026-01-23T00:07:12.889733481Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:12.891038 containerd[2014]: time="2026-01-23T00:07:12.890990164Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=22152004" Jan 23 00:07:12.892393 containerd[2014]: time="2026-01-23T00:07:12.892347262Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:12.896018 containerd[2014]: time="2026-01-23T00:07:12.895955093Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:12.897467 containerd[2014]: time="2026-01-23T00:07:12.897422655Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 1.785363373s" Jan 23 00:07:12.897673 containerd[2014]: time="2026-01-23T00:07:12.897640478Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 23 00:07:12.905859 containerd[2014]: time="2026-01-23T00:07:12.905799228Z" level=info msg="CreateContainer within sandbox \"80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 23 00:07:12.922772 containerd[2014]: time="2026-01-23T00:07:12.922700034Z" level=info msg="Container 
6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:12.927303 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1331620995.mount: Deactivated successfully. Jan 23 00:07:12.932791 containerd[2014]: time="2026-01-23T00:07:12.932698724Z" level=info msg="CreateContainer within sandbox \"80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\"" Jan 23 00:07:12.934258 containerd[2014]: time="2026-01-23T00:07:12.933943509Z" level=info msg="StartContainer for \"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\"" Jan 23 00:07:12.938898 containerd[2014]: time="2026-01-23T00:07:12.938440408Z" level=info msg="connecting to shim 6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034" address="unix:///run/containerd/s/1a2f7e10d93a63f78e8384340d022cb9d4a5aa4b7fd76c7e4914d115a5e85cf1" protocol=ttrpc version=3 Jan 23 00:07:12.976759 systemd[1]: Started cri-containerd-6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034.scope - libcontainer container 6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034. 
Jan 23 00:07:13.037080 containerd[2014]: time="2026-01-23T00:07:13.036922434Z" level=info msg="StartContainer for \"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\" returns successfully" Jan 23 00:07:14.706088 kubelet[3616]: I0123 00:07:14.705989 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-65cdcdfd6d-xzvjm" podStartSLOduration=2.916637139 podStartE2EDuration="4.705954322s" podCreationTimestamp="2026-01-23 00:07:10 +0000 UTC" firstStartedPulling="2026-01-23 00:07:11.10972435 +0000 UTC m=+3.735961669" lastFinishedPulling="2026-01-23 00:07:12.899041557 +0000 UTC m=+5.525278852" observedRunningTime="2026-01-23 00:07:13.758618868 +0000 UTC m=+6.384856187" watchObservedRunningTime="2026-01-23 00:07:14.705954322 +0000 UTC m=+7.332191617" Jan 23 00:07:21.931151 sudo[2369]: pam_unix(sudo:session): session closed for user root Jan 23 00:07:22.013061 sshd[2368]: Connection closed by 4.153.228.146 port 33082 Jan 23 00:07:22.015871 sshd-session[2365]: pam_unix(sshd:session): session closed for user core Jan 23 00:07:22.025083 systemd[1]: sshd@6-172.31.18.130:22-4.153.228.146:33082.service: Deactivated successfully. Jan 23 00:07:22.034309 systemd[1]: session-7.scope: Deactivated successfully. Jan 23 00:07:22.034902 systemd[1]: session-7.scope: Consumed 12.016s CPU time, 225.3M memory peak. Jan 23 00:07:22.039904 systemd-logind[1983]: Session 7 logged out. Waiting for processes to exit. Jan 23 00:07:22.046586 systemd-logind[1983]: Removed session 7. Jan 23 00:07:37.923388 systemd[1]: Created slice kubepods-besteffort-pod97c6aac9_e0b5_4959_bab1_11c931cc5f10.slice - libcontainer container kubepods-besteffort-pod97c6aac9_e0b5_4959_bab1_11c931cc5f10.slice. 
Jan 23 00:07:38.011550 kubelet[3616]: I0123 00:07:38.010708 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9rscd\" (UniqueName: \"kubernetes.io/projected/97c6aac9-e0b5-4959-bab1-11c931cc5f10-kube-api-access-9rscd\") pod \"calico-typha-75c6644f97-blv44\" (UID: \"97c6aac9-e0b5-4959-bab1-11c931cc5f10\") " pod="calico-system/calico-typha-75c6644f97-blv44" Jan 23 00:07:38.011550 kubelet[3616]: I0123 00:07:38.010803 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/97c6aac9-e0b5-4959-bab1-11c931cc5f10-tigera-ca-bundle\") pod \"calico-typha-75c6644f97-blv44\" (UID: \"97c6aac9-e0b5-4959-bab1-11c931cc5f10\") " pod="calico-system/calico-typha-75c6644f97-blv44" Jan 23 00:07:38.011550 kubelet[3616]: I0123 00:07:38.010851 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/97c6aac9-e0b5-4959-bab1-11c931cc5f10-typha-certs\") pod \"calico-typha-75c6644f97-blv44\" (UID: \"97c6aac9-e0b5-4959-bab1-11c931cc5f10\") " pod="calico-system/calico-typha-75c6644f97-blv44" Jan 23 00:07:38.196285 systemd[1]: Created slice kubepods-besteffort-podf3e9f159_6e0e_41c7_b138_2e84b8f68ed7.slice - libcontainer container kubepods-besteffort-podf3e9f159_6e0e_41c7_b138_2e84b8f68ed7.slice. 
Jan 23 00:07:38.214689 kubelet[3616]: I0123 00:07:38.214613 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-policysync\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214689 kubelet[3616]: I0123 00:07:38.214689 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-cni-net-dir\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214922 kubelet[3616]: I0123 00:07:38.214725 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-tigera-ca-bundle\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214922 kubelet[3616]: I0123 00:07:38.214759 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-xtables-lock\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214922 kubelet[3616]: I0123 00:07:38.214795 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-cni-bin-dir\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214922 kubelet[3616]: I0123 00:07:38.214841 3616 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-flexvol-driver-host\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.214922 kubelet[3616]: I0123 00:07:38.214877 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-var-lib-calico\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.215165 kubelet[3616]: I0123 00:07:38.214909 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-var-run-calico\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.215165 kubelet[3616]: I0123 00:07:38.214942 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-cni-log-dir\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.215165 kubelet[3616]: I0123 00:07:38.214974 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-lib-modules\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.215165 kubelet[3616]: I0123 00:07:38.215013 3616 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-node-certs\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.215165 kubelet[3616]: I0123 00:07:38.215051 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lxwq4\" (UniqueName: \"kubernetes.io/projected/f3e9f159-6e0e-41c7-b138-2e84b8f68ed7-kube-api-access-lxwq4\") pod \"calico-node-zxwhw\" (UID: \"f3e9f159-6e0e-41c7-b138-2e84b8f68ed7\") " pod="calico-system/calico-node-zxwhw" Jan 23 00:07:38.243845 containerd[2014]: time="2026-01-23T00:07:38.243779128Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c6644f97-blv44,Uid:97c6aac9-e0b5-4959-bab1-11c931cc5f10,Namespace:calico-system,Attempt:0,}" Jan 23 00:07:38.298637 containerd[2014]: time="2026-01-23T00:07:38.298202479Z" level=info msg="connecting to shim 047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898" address="unix:///run/containerd/s/f5dde45ef023a400fba544055ff3bddc316a9107dc7d597d723a8a21fd387c08" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:38.323369 kubelet[3616]: E0123 00:07:38.322961 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.323369 kubelet[3616]: W0123 00:07:38.323009 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.323369 kubelet[3616]: E0123 00:07:38.323048 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.327192 kubelet[3616]: E0123 00:07:38.326642 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.327192 kubelet[3616]: W0123 00:07:38.326685 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.327192 kubelet[3616]: E0123 00:07:38.326721 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.330673 kubelet[3616]: E0123 00:07:38.330620 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.330673 kubelet[3616]: W0123 00:07:38.330660 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.330673 kubelet[3616]: E0123 00:07:38.330696 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.331220 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.334023 kubelet[3616]: W0123 00:07:38.331246 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.331273 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.331743 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.334023 kubelet[3616]: W0123 00:07:38.331764 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.331791 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.332767 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.334023 kubelet[3616]: W0123 00:07:38.332795 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.334023 kubelet[3616]: E0123 00:07:38.332826 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.336618 kubelet[3616]: E0123 00:07:38.335334 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.336618 kubelet[3616]: W0123 00:07:38.335364 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.336618 kubelet[3616]: E0123 00:07:38.335421 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.337725 kubelet[3616]: E0123 00:07:38.337597 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.337725 kubelet[3616]: W0123 00:07:38.337627 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.337725 kubelet[3616]: E0123 00:07:38.337660 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.339619 kubelet[3616]: E0123 00:07:38.339253 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.339619 kubelet[3616]: W0123 00:07:38.339297 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.339619 kubelet[3616]: E0123 00:07:38.339332 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.342074 kubelet[3616]: E0123 00:07:38.341779 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.342785 kubelet[3616]: W0123 00:07:38.342142 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.342785 kubelet[3616]: E0123 00:07:38.342188 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.345059 kubelet[3616]: E0123 00:07:38.345007 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.345059 kubelet[3616]: W0123 00:07:38.345048 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.345273 kubelet[3616]: E0123 00:07:38.345084 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.347443 kubelet[3616]: E0123 00:07:38.347387 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.347443 kubelet[3616]: W0123 00:07:38.347428 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.348727 kubelet[3616]: E0123 00:07:38.347651 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.348727 kubelet[3616]: E0123 00:07:38.348562 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.348727 kubelet[3616]: W0123 00:07:38.348589 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.348727 kubelet[3616]: E0123 00:07:38.348620 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.351926 kubelet[3616]: E0123 00:07:38.351873 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.351926 kubelet[3616]: W0123 00:07:38.351913 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.352115 kubelet[3616]: E0123 00:07:38.351951 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.353166 kubelet[3616]: E0123 00:07:38.353030 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.353166 kubelet[3616]: W0123 00:07:38.353072 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.353166 kubelet[3616]: E0123 00:07:38.353105 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.354292 kubelet[3616]: E0123 00:07:38.353486 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.354292 kubelet[3616]: W0123 00:07:38.353536 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.354292 kubelet[3616]: E0123 00:07:38.353560 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.354292 kubelet[3616]: E0123 00:07:38.354586 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.354292 kubelet[3616]: W0123 00:07:38.354614 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.354292 kubelet[3616]: E0123 00:07:38.354643 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.356330 kubelet[3616]: E0123 00:07:38.355627 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.356330 kubelet[3616]: W0123 00:07:38.355653 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.356330 kubelet[3616]: E0123 00:07:38.355683 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.357201 kubelet[3616]: E0123 00:07:38.356796 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:07:38.359087 kubelet[3616]: E0123 00:07:38.358963 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.359087 kubelet[3616]: W0123 00:07:38.359005 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.359087 kubelet[3616]: E0123 00:07:38.359040 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.359544 kubelet[3616]: E0123 00:07:38.359454 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.359544 kubelet[3616]: W0123 00:07:38.359487 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.359544 kubelet[3616]: E0123 00:07:38.359542 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.360671 kubelet[3616]: E0123 00:07:38.360620 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.360671 kubelet[3616]: W0123 00:07:38.360661 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.360810 kubelet[3616]: E0123 00:07:38.360694 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.361248 kubelet[3616]: E0123 00:07:38.361186 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.361248 kubelet[3616]: W0123 00:07:38.361221 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.361248 kubelet[3616]: E0123 00:07:38.361247 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.362733 kubelet[3616]: E0123 00:07:38.362680 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.362733 kubelet[3616]: W0123 00:07:38.362720 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.362929 kubelet[3616]: E0123 00:07:38.362755 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.365002 kubelet[3616]: E0123 00:07:38.364905 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.365002 kubelet[3616]: W0123 00:07:38.364946 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.365189 kubelet[3616]: E0123 00:07:38.365018 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.365752 kubelet[3616]: E0123 00:07:38.365466 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.365752 kubelet[3616]: W0123 00:07:38.365515 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.365752 kubelet[3616]: E0123 00:07:38.365549 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.366084 kubelet[3616]: E0123 00:07:38.366062 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.366145 kubelet[3616]: W0123 00:07:38.366082 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.366145 kubelet[3616]: E0123 00:07:38.366107 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.367035 kubelet[3616]: E0123 00:07:38.366793 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.367035 kubelet[3616]: W0123 00:07:38.366831 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.367035 kubelet[3616]: E0123 00:07:38.366864 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.369207 kubelet[3616]: E0123 00:07:38.369144 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.369207 kubelet[3616]: W0123 00:07:38.369183 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.369898 kubelet[3616]: E0123 00:07:38.369216 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.371410 kubelet[3616]: E0123 00:07:38.371333 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.371410 kubelet[3616]: W0123 00:07:38.371379 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.371410 kubelet[3616]: E0123 00:07:38.371415 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.373782 kubelet[3616]: E0123 00:07:38.373107 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.373782 kubelet[3616]: W0123 00:07:38.373197 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.373782 kubelet[3616]: E0123 00:07:38.373246 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.376037 kubelet[3616]: E0123 00:07:38.375922 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.376037 kubelet[3616]: W0123 00:07:38.375961 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.376037 kubelet[3616]: E0123 00:07:38.375997 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.377061 kubelet[3616]: E0123 00:07:38.376909 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.377061 kubelet[3616]: W0123 00:07:38.376938 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.377061 kubelet[3616]: E0123 00:07:38.376981 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.378031 kubelet[3616]: E0123 00:07:38.377630 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.378031 kubelet[3616]: W0123 00:07:38.377662 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.378031 kubelet[3616]: E0123 00:07:38.377692 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.406842 kubelet[3616]: E0123 00:07:38.403482 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.406842 kubelet[3616]: W0123 00:07:38.403935 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.406842 kubelet[3616]: E0123 00:07:38.404079 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.406842 kubelet[3616]: E0123 00:07:38.405105 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.406842 kubelet[3616]: W0123 00:07:38.405130 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.406842 kubelet[3616]: E0123 00:07:38.405200 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.408846 kubelet[3616]: E0123 00:07:38.408794 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.408846 kubelet[3616]: W0123 00:07:38.408834 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.409007 kubelet[3616]: E0123 00:07:38.408870 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.410625 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.412530 kubelet[3616]: W0123 00:07:38.410665 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.410699 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.411410 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.412530 kubelet[3616]: W0123 00:07:38.411434 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.411543 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.412417 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.412530 kubelet[3616]: W0123 00:07:38.412447 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.412530 kubelet[3616]: E0123 00:07:38.412528 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.414084 kubelet[3616]: E0123 00:07:38.413574 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.414084 kubelet[3616]: W0123 00:07:38.413638 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.414084 kubelet[3616]: E0123 00:07:38.413670 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.415182 kubelet[3616]: E0123 00:07:38.415051 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.415331 kubelet[3616]: W0123 00:07:38.415090 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.415331 kubelet[3616]: E0123 00:07:38.415244 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.416428 kubelet[3616]: E0123 00:07:38.416371 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.417537 kubelet[3616]: W0123 00:07:38.416413 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.417537 kubelet[3616]: E0123 00:07:38.417107 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.420552 kubelet[3616]: E0123 00:07:38.418806 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.420552 kubelet[3616]: W0123 00:07:38.418852 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.420552 kubelet[3616]: E0123 00:07:38.418888 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.421058 kubelet[3616]: E0123 00:07:38.421013 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.421230 kubelet[3616]: W0123 00:07:38.421051 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.421576 kubelet[3616]: E0123 00:07:38.421312 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.423705 kubelet[3616]: E0123 00:07:38.422980 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.423705 kubelet[3616]: W0123 00:07:38.423035 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.423705 kubelet[3616]: E0123 00:07:38.423070 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.423935 kubelet[3616]: E0123 00:07:38.423759 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.423935 kubelet[3616]: W0123 00:07:38.423796 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.423935 kubelet[3616]: E0123 00:07:38.423827 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.426093 kubelet[3616]: E0123 00:07:38.425944 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.426093 kubelet[3616]: W0123 00:07:38.426052 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.426297 kubelet[3616]: E0123 00:07:38.426124 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.427533 kubelet[3616]: E0123 00:07:38.426935 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.427533 kubelet[3616]: W0123 00:07:38.426971 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.427533 kubelet[3616]: E0123 00:07:38.427019 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.427815 kubelet[3616]: E0123 00:07:38.427697 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.427815 kubelet[3616]: W0123 00:07:38.427722 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.427815 kubelet[3616]: E0123 00:07:38.427761 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.429139 kubelet[3616]: E0123 00:07:38.428975 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.429139 kubelet[3616]: W0123 00:07:38.429015 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.429139 kubelet[3616]: E0123 00:07:38.429094 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.433586 kubelet[3616]: E0123 00:07:38.433373 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.433586 kubelet[3616]: W0123 00:07:38.433425 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.433586 kubelet[3616]: E0123 00:07:38.433477 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.448098 kubelet[3616]: E0123 00:07:38.447946 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.448098 kubelet[3616]: W0123 00:07:38.447986 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.448098 kubelet[3616]: E0123 00:07:38.448035 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.460767 kubelet[3616]: E0123 00:07:38.460700 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.460767 kubelet[3616]: W0123 00:07:38.460759 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.460950 kubelet[3616]: E0123 00:07:38.460797 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.463835 kubelet[3616]: E0123 00:07:38.462830 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.463835 kubelet[3616]: W0123 00:07:38.462874 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.463835 kubelet[3616]: E0123 00:07:38.462910 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.464790 systemd[1]: Started cri-containerd-047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898.scope - libcontainer container 047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898. Jan 23 00:07:38.470247 kubelet[3616]: E0123 00:07:38.470193 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.470247 kubelet[3616]: W0123 00:07:38.470235 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.470432 kubelet[3616]: E0123 00:07:38.470286 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.471238 kubelet[3616]: E0123 00:07:38.471193 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.471478 kubelet[3616]: W0123 00:07:38.471230 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.471478 kubelet[3616]: E0123 00:07:38.471280 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.474226 kubelet[3616]: E0123 00:07:38.474156 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.474226 kubelet[3616]: W0123 00:07:38.474215 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.474226 kubelet[3616]: E0123 00:07:38.474250 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.475243 kubelet[3616]: I0123 00:07:38.474296 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g48bw\" (UniqueName: \"kubernetes.io/projected/73991cb4-51f1-4920-a4d2-a782912c4922-kube-api-access-g48bw\") pod \"csi-node-driver-d5rlp\" (UID: \"73991cb4-51f1-4920-a4d2-a782912c4922\") " pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:38.478642 kubelet[3616]: E0123 00:07:38.478575 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.478642 kubelet[3616]: W0123 00:07:38.478616 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.478830 kubelet[3616]: E0123 00:07:38.478653 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.478830 kubelet[3616]: I0123 00:07:38.478699 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/73991cb4-51f1-4920-a4d2-a782912c4922-varrun\") pod \"csi-node-driver-d5rlp\" (UID: \"73991cb4-51f1-4920-a4d2-a782912c4922\") " pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:38.480394 kubelet[3616]: E0123 00:07:38.480336 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.484888 kubelet[3616]: W0123 00:07:38.484805 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.485016 kubelet[3616]: E0123 00:07:38.484894 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.485069 kubelet[3616]: I0123 00:07:38.485048 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/73991cb4-51f1-4920-a4d2-a782912c4922-kubelet-dir\") pod \"csi-node-driver-d5rlp\" (UID: \"73991cb4-51f1-4920-a4d2-a782912c4922\") " pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:38.485795 kubelet[3616]: E0123 00:07:38.485745 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.485795 kubelet[3616]: W0123 00:07:38.485786 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.486009 kubelet[3616]: E0123 00:07:38.485819 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.486009 kubelet[3616]: I0123 00:07:38.485858 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/73991cb4-51f1-4920-a4d2-a782912c4922-registration-dir\") pod \"csi-node-driver-d5rlp\" (UID: \"73991cb4-51f1-4920-a4d2-a782912c4922\") " pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:38.487577 kubelet[3616]: E0123 00:07:38.486767 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.488301 kubelet[3616]: W0123 00:07:38.487598 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.488301 kubelet[3616]: E0123 00:07:38.487907 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.488301 kubelet[3616]: I0123 00:07:38.488092 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/73991cb4-51f1-4920-a4d2-a782912c4922-socket-dir\") pod \"csi-node-driver-d5rlp\" (UID: \"73991cb4-51f1-4920-a4d2-a782912c4922\") " pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:38.489482 kubelet[3616]: E0123 00:07:38.489427 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.489669 kubelet[3616]: W0123 00:07:38.489472 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.489669 kubelet[3616]: E0123 00:07:38.489643 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.491151 kubelet[3616]: E0123 00:07:38.491102 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.491151 kubelet[3616]: W0123 00:07:38.491141 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.491359 kubelet[3616]: E0123 00:07:38.491174 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.492601 kubelet[3616]: E0123 00:07:38.492479 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.492601 kubelet[3616]: W0123 00:07:38.492591 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.492796 kubelet[3616]: E0123 00:07:38.492623 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.493059 kubelet[3616]: E0123 00:07:38.493018 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.493059 kubelet[3616]: W0123 00:07:38.493049 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.493253 kubelet[3616]: E0123 00:07:38.493073 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.493502 kubelet[3616]: E0123 00:07:38.493450 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.493691 kubelet[3616]: W0123 00:07:38.493483 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.493691 kubelet[3616]: E0123 00:07:38.493535 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.493950 kubelet[3616]: E0123 00:07:38.493902 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.493950 kubelet[3616]: W0123 00:07:38.493931 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.494568 kubelet[3616]: E0123 00:07:38.493954 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.494568 kubelet[3616]: E0123 00:07:38.494371 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.494568 kubelet[3616]: W0123 00:07:38.494389 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.494568 kubelet[3616]: E0123 00:07:38.494411 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.494930 kubelet[3616]: E0123 00:07:38.494854 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.494930 kubelet[3616]: W0123 00:07:38.494872 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.494930 kubelet[3616]: E0123 00:07:38.494894 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.496342 kubelet[3616]: E0123 00:07:38.496292 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.496342 kubelet[3616]: W0123 00:07:38.496331 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.496484 kubelet[3616]: E0123 00:07:38.496382 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.499039 kubelet[3616]: E0123 00:07:38.498982 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.499039 kubelet[3616]: W0123 00:07:38.499026 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.499248 kubelet[3616]: E0123 00:07:38.499075 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.510881 containerd[2014]: time="2026-01-23T00:07:38.510809563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxwhw,Uid:f3e9f159-6e0e-41c7-b138-2e84b8f68ed7,Namespace:calico-system,Attempt:0,}" Jan 23 00:07:38.555954 containerd[2014]: time="2026-01-23T00:07:38.555875794Z" level=info msg="connecting to shim aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af" address="unix:///run/containerd/s/e9e957c1bdf455fcf0b81bf2f527da7904927a065b02c44ff48a9ac2c2464140" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:38.589151 kubelet[3616]: E0123 00:07:38.589072 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.589151 kubelet[3616]: W0123 00:07:38.589110 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.589151 kubelet[3616]: E0123 00:07:38.589142 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.589686 kubelet[3616]: E0123 00:07:38.589641 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.589686 kubelet[3616]: W0123 00:07:38.589678 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.589803 kubelet[3616]: E0123 00:07:38.589705 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590058 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.590978 kubelet[3616]: W0123 00:07:38.590086 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590113 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590459 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.590978 kubelet[3616]: W0123 00:07:38.590476 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590528 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590918 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.590978 kubelet[3616]: W0123 00:07:38.590935 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.590978 kubelet[3616]: E0123 00:07:38.590959 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.592064 kubelet[3616]: E0123 00:07:38.591821 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.592064 kubelet[3616]: W0123 00:07:38.591860 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.592064 kubelet[3616]: E0123 00:07:38.591892 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.592636 kubelet[3616]: E0123 00:07:38.592315 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.592636 kubelet[3616]: W0123 00:07:38.592346 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.592636 kubelet[3616]: E0123 00:07:38.592370 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.593678 kubelet[3616]: E0123 00:07:38.592798 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.593678 kubelet[3616]: W0123 00:07:38.592818 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.593678 kubelet[3616]: E0123 00:07:38.592843 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.593856 kubelet[3616]: E0123 00:07:38.593703 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.593856 kubelet[3616]: W0123 00:07:38.593727 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.593856 kubelet[3616]: E0123 00:07:38.593785 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.595012 kubelet[3616]: E0123 00:07:38.594292 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.595012 kubelet[3616]: W0123 00:07:38.594328 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.595012 kubelet[3616]: E0123 00:07:38.594355 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.595012 kubelet[3616]: E0123 00:07:38.594978 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.595012 kubelet[3616]: W0123 00:07:38.595001 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.595333 kubelet[3616]: E0123 00:07:38.595058 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.595659 kubelet[3616]: E0123 00:07:38.595617 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.595659 kubelet[3616]: W0123 00:07:38.595649 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.596019 kubelet[3616]: E0123 00:07:38.595676 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.597528 kubelet[3616]: E0123 00:07:38.596469 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.597528 kubelet[3616]: W0123 00:07:38.596577 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.597528 kubelet[3616]: E0123 00:07:38.596634 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.597528 kubelet[3616]: E0123 00:07:38.597251 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.597528 kubelet[3616]: W0123 00:07:38.597273 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.597528 kubelet[3616]: E0123 00:07:38.597300 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.597899 kubelet[3616]: E0123 00:07:38.597876 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.597951 kubelet[3616]: W0123 00:07:38.597897 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.597951 kubelet[3616]: E0123 00:07:38.597919 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.600015 kubelet[3616]: E0123 00:07:38.599201 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.600015 kubelet[3616]: W0123 00:07:38.599433 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.600015 kubelet[3616]: E0123 00:07:38.599469 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.600548 kubelet[3616]: E0123 00:07:38.600395 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.600548 kubelet[3616]: W0123 00:07:38.600431 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.600548 kubelet[3616]: E0123 00:07:38.600464 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.601593 kubelet[3616]: E0123 00:07:38.601541 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.601593 kubelet[3616]: W0123 00:07:38.601583 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.601758 kubelet[3616]: E0123 00:07:38.601618 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.602619 kubelet[3616]: E0123 00:07:38.602058 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.602619 kubelet[3616]: W0123 00:07:38.602090 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.602619 kubelet[3616]: E0123 00:07:38.602117 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.602619 kubelet[3616]: E0123 00:07:38.602571 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.602619 kubelet[3616]: W0123 00:07:38.602593 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.602619 kubelet[3616]: E0123 00:07:38.602617 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.603390 kubelet[3616]: E0123 00:07:38.602970 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.603390 kubelet[3616]: W0123 00:07:38.603007 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.603390 kubelet[3616]: E0123 00:07:38.603030 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.603446 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.605731 kubelet[3616]: W0123 00:07:38.603464 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.603516 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.603922 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.605731 kubelet[3616]: W0123 00:07:38.603940 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.603961 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.604556 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.605731 kubelet[3616]: W0123 00:07:38.604579 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.604605 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:38.605731 kubelet[3616]: E0123 00:07:38.605051 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.606196 kubelet[3616]: W0123 00:07:38.605068 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.606196 kubelet[3616]: E0123 00:07:38.605089 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.638166 kubelet[3616]: E0123 00:07:38.638131 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:38.638608 kubelet[3616]: W0123 00:07:38.638480 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:38.638608 kubelet[3616]: E0123 00:07:38.638548 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:38.649830 systemd[1]: Started cri-containerd-aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af.scope - libcontainer container aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af. 
Jan 23 00:07:38.722356 containerd[2014]: time="2026-01-23T00:07:38.721787490Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-75c6644f97-blv44,Uid:97c6aac9-e0b5-4959-bab1-11c931cc5f10,Namespace:calico-system,Attempt:0,} returns sandbox id \"047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898\"" Jan 23 00:07:38.731427 containerd[2014]: time="2026-01-23T00:07:38.731205130Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 23 00:07:38.759827 containerd[2014]: time="2026-01-23T00:07:38.759667900Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-zxwhw,Uid:f3e9f159-6e0e-41c7-b138-2e84b8f68ed7,Namespace:calico-system,Attempt:0,} returns sandbox id \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\"" Jan 23 00:07:39.868438 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3457902122.mount: Deactivated successfully. Jan 23 00:07:40.630688 kubelet[3616]: E0123 00:07:40.630551 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:07:40.727125 containerd[2014]: time="2026-01-23T00:07:40.727036925Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:40.728869 containerd[2014]: time="2026-01-23T00:07:40.728808883Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=33090687" Jan 23 00:07:40.730949 containerd[2014]: time="2026-01-23T00:07:40.730269284Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:40.737998 
containerd[2014]: time="2026-01-23T00:07:40.737725127Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:40.740783 containerd[2014]: time="2026-01-23T00:07:40.739793168Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.008074107s" Jan 23 00:07:40.740783 containerd[2014]: time="2026-01-23T00:07:40.739851483Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 23 00:07:40.744667 containerd[2014]: time="2026-01-23T00:07:40.744044717Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 23 00:07:40.783323 containerd[2014]: time="2026-01-23T00:07:40.783275940Z" level=info msg="CreateContainer within sandbox \"047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 23 00:07:40.807055 containerd[2014]: time="2026-01-23T00:07:40.805839892Z" level=info msg="Container c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:40.807910 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3572234944.mount: Deactivated successfully. 
Jan 23 00:07:40.823250 containerd[2014]: time="2026-01-23T00:07:40.823173453Z" level=info msg="CreateContainer within sandbox \"047457cf11b0be526a3eef5f665f15c959c95c5474e91bbb124cb6bb0f068898\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01\"" Jan 23 00:07:40.824613 containerd[2014]: time="2026-01-23T00:07:40.824466970Z" level=info msg="StartContainer for \"c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01\"" Jan 23 00:07:40.834894 containerd[2014]: time="2026-01-23T00:07:40.834652947Z" level=info msg="connecting to shim c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01" address="unix:///run/containerd/s/f5dde45ef023a400fba544055ff3bddc316a9107dc7d597d723a8a21fd387c08" protocol=ttrpc version=3 Jan 23 00:07:40.883792 systemd[1]: Started cri-containerd-c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01.scope - libcontainer container c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01. Jan 23 00:07:40.974536 containerd[2014]: time="2026-01-23T00:07:40.974359515Z" level=info msg="StartContainer for \"c2e21afe1cd79fecacd69a5dfe854495fbc25e3c95306a30649a4b66c8665e01\" returns successfully" Jan 23 00:07:41.895702 kubelet[3616]: E0123 00:07:41.895462 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.895702 kubelet[3616]: W0123 00:07:41.895617 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.895702 kubelet[3616]: E0123 00:07:41.895658 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.896479 kubelet[3616]: E0123 00:07:41.896107 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.896479 kubelet[3616]: W0123 00:07:41.896127 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.896479 kubelet[3616]: E0123 00:07:41.896151 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.896693 kubelet[3616]: E0123 00:07:41.896586 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.896693 kubelet[3616]: W0123 00:07:41.896632 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.896693 kubelet[3616]: E0123 00:07:41.896657 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.897741 kubelet[3616]: E0123 00:07:41.897203 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.897741 kubelet[3616]: W0123 00:07:41.897236 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.897741 kubelet[3616]: E0123 00:07:41.897293 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.897960 kubelet[3616]: E0123 00:07:41.897907 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.897960 kubelet[3616]: W0123 00:07:41.897928 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.898076 kubelet[3616]: E0123 00:07:41.897977 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.900693 kubelet[3616]: E0123 00:07:41.900580 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.900693 kubelet[3616]: W0123 00:07:41.900621 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.900693 kubelet[3616]: E0123 00:07:41.900655 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.903844 kubelet[3616]: E0123 00:07:41.903693 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.903844 kubelet[3616]: W0123 00:07:41.903734 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.903844 kubelet[3616]: E0123 00:07:41.903770 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.906666 kubelet[3616]: E0123 00:07:41.906413 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.906819 kubelet[3616]: W0123 00:07:41.906740 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.907528 kubelet[3616]: E0123 00:07:41.906786 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.910150 kubelet[3616]: E0123 00:07:41.909179 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.910150 kubelet[3616]: W0123 00:07:41.909218 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.910150 kubelet[3616]: E0123 00:07:41.909282 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.910150 kubelet[3616]: E0123 00:07:41.909838 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.910150 kubelet[3616]: W0123 00:07:41.909897 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.910150 kubelet[3616]: E0123 00:07:41.909932 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.912304 kubelet[3616]: E0123 00:07:41.911986 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.912304 kubelet[3616]: W0123 00:07:41.912026 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.912304 kubelet[3616]: E0123 00:07:41.912058 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.913454 kubelet[3616]: E0123 00:07:41.912838 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.913454 kubelet[3616]: W0123 00:07:41.912873 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.913454 kubelet[3616]: E0123 00:07:41.912935 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.913454 kubelet[3616]: I0123 00:07:41.913105 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-75c6644f97-blv44" podStartSLOduration=2.8998958999999997 podStartE2EDuration="4.913088662s" podCreationTimestamp="2026-01-23 00:07:37 +0000 UTC" firstStartedPulling="2026-01-23 00:07:38.730269598 +0000 UTC m=+31.356506893" lastFinishedPulling="2026-01-23 00:07:40.743462348 +0000 UTC m=+33.369699655" observedRunningTime="2026-01-23 00:07:41.905174637 +0000 UTC m=+34.531411956" watchObservedRunningTime="2026-01-23 00:07:41.913088662 +0000 UTC m=+34.539325945" Jan 23 00:07:41.913813 kubelet[3616]: E0123 00:07:41.913487 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.913813 kubelet[3616]: W0123 00:07:41.913585 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.913813 kubelet[3616]: E0123 00:07:41.913610 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.915215 kubelet[3616]: E0123 00:07:41.914045 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.915215 kubelet[3616]: W0123 00:07:41.914119 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.915215 kubelet[3616]: E0123 00:07:41.914145 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.915215 kubelet[3616]: E0123 00:07:41.914746 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.915215 kubelet[3616]: W0123 00:07:41.914801 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.915215 kubelet[3616]: E0123 00:07:41.914830 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.925585 kubelet[3616]: E0123 00:07:41.925470 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.925731 kubelet[3616]: W0123 00:07:41.925574 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.925731 kubelet[3616]: E0123 00:07:41.925644 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.926935 kubelet[3616]: E0123 00:07:41.926872 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.926935 kubelet[3616]: W0123 00:07:41.926906 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.926935 kubelet[3616]: E0123 00:07:41.926938 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.931804 kubelet[3616]: E0123 00:07:41.931741 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.931931 kubelet[3616]: W0123 00:07:41.931815 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.932086 kubelet[3616]: E0123 00:07:41.931849 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.933719 kubelet[3616]: E0123 00:07:41.933677 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.933719 kubelet[3616]: W0123 00:07:41.933714 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.934004 kubelet[3616]: E0123 00:07:41.933748 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.936656 kubelet[3616]: E0123 00:07:41.936619 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.937485 kubelet[3616]: W0123 00:07:41.936928 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.937485 kubelet[3616]: E0123 00:07:41.937355 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.938394 kubelet[3616]: E0123 00:07:41.938301 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.938693 kubelet[3616]: W0123 00:07:41.938330 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.938693 kubelet[3616]: E0123 00:07:41.938579 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.939666 kubelet[3616]: E0123 00:07:41.939571 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.939666 kubelet[3616]: W0123 00:07:41.939605 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.939666 kubelet[3616]: E0123 00:07:41.939635 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.941353 kubelet[3616]: E0123 00:07:41.941316 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.942083 kubelet[3616]: W0123 00:07:41.941869 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.942083 kubelet[3616]: E0123 00:07:41.941917 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.945767 kubelet[3616]: E0123 00:07:41.945611 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.946357 kubelet[3616]: W0123 00:07:41.945725 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.946357 kubelet[3616]: E0123 00:07:41.945976 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.948944 kubelet[3616]: E0123 00:07:41.948070 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.949390 kubelet[3616]: W0123 00:07:41.949167 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.949390 kubelet[3616]: E0123 00:07:41.949221 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.951390 kubelet[3616]: E0123 00:07:41.951354 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.952059 kubelet[3616]: W0123 00:07:41.951612 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.952059 kubelet[3616]: E0123 00:07:41.951655 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.953951 kubelet[3616]: E0123 00:07:41.953764 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.953951 kubelet[3616]: W0123 00:07:41.953801 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.953951 kubelet[3616]: E0123 00:07:41.953834 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.954937 kubelet[3616]: E0123 00:07:41.954659 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.954937 kubelet[3616]: W0123 00:07:41.954685 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.954937 kubelet[3616]: E0123 00:07:41.954712 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.956731 kubelet[3616]: E0123 00:07:41.956692 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.958027 kubelet[3616]: W0123 00:07:41.957608 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.958027 kubelet[3616]: E0123 00:07:41.957661 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.960091 kubelet[3616]: E0123 00:07:41.960037 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.960091 kubelet[3616]: W0123 00:07:41.960077 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.960302 kubelet[3616]: E0123 00:07:41.960111 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.965734 kubelet[3616]: E0123 00:07:41.965654 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.965734 kubelet[3616]: W0123 00:07:41.965728 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.966289 kubelet[3616]: E0123 00:07:41.965771 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:41.966740 kubelet[3616]: E0123 00:07:41.966684 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.968462 kubelet[3616]: W0123 00:07:41.968400 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.968462 kubelet[3616]: E0123 00:07:41.968463 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 23 00:07:41.971776 kubelet[3616]: E0123 00:07:41.971723 3616 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 23 00:07:41.971776 kubelet[3616]: W0123 00:07:41.971763 3616 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 23 00:07:41.972049 kubelet[3616]: E0123 00:07:41.971798 3616 plugins.go:697] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 23 00:07:42.008566 containerd[2014]: time="2026-01-23T00:07:42.008469736Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:42.011638 containerd[2014]: time="2026-01-23T00:07:42.011573220Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4266741" Jan 23 00:07:42.013359 containerd[2014]: time="2026-01-23T00:07:42.013293183Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:42.020533 containerd[2014]: time="2026-01-23T00:07:42.019830861Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:42.022531 containerd[2014]: time="2026-01-23T00:07:42.022464587Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.278364662s" Jan 23 00:07:42.022721 containerd[2014]: time="2026-01-23T00:07:42.022688011Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 23 00:07:42.030431 containerd[2014]: time="2026-01-23T00:07:42.030238654Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 23 00:07:42.049055 containerd[2014]: time="2026-01-23T00:07:42.048986908Z" level=info msg="Container 7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:42.059033 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2768905987.mount: Deactivated successfully. Jan 23 00:07:42.069781 containerd[2014]: time="2026-01-23T00:07:42.069705523Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd\"" Jan 23 00:07:42.070826 containerd[2014]: time="2026-01-23T00:07:42.070785079Z" level=info msg="StartContainer for \"7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd\"" Jan 23 00:07:42.074655 containerd[2014]: time="2026-01-23T00:07:42.074541864Z" level=info msg="connecting to shim 7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd" address="unix:///run/containerd/s/e9e957c1bdf455fcf0b81bf2f527da7904927a065b02c44ff48a9ac2c2464140" protocol=ttrpc version=3 Jan 23 00:07:42.123098 systemd[1]: Started cri-containerd-7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd.scope - libcontainer container 7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd. Jan 23 00:07:42.232069 containerd[2014]: time="2026-01-23T00:07:42.231827126Z" level=info msg="StartContainer for \"7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd\" returns successfully" Jan 23 00:07:42.267266 systemd[1]: cri-containerd-7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd.scope: Deactivated successfully. 
Jan 23 00:07:42.276139 containerd[2014]: time="2026-01-23T00:07:42.276067115Z" level=info msg="received container exit event container_id:\"7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd\" id:\"7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd\" pid:4342 exited_at:{seconds:1769126862 nanos:275480441}"
Jan 23 00:07:42.316999 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7b9f15061527617ddbd64c60fc9ae635b8b51c50053ec280ae102b65d0c897cd-rootfs.mount: Deactivated successfully.
Jan 23 00:07:42.630694 kubelet[3616]: E0123 00:07:42.630617 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:07:42.884096 containerd[2014]: time="2026-01-23T00:07:42.882980728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 23 00:07:44.630929 kubelet[3616]: E0123 00:07:44.630756 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:07:45.793753 containerd[2014]: time="2026-01-23T00:07:45.793696423Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:45.797209 containerd[2014]: time="2026-01-23T00:07:45.797154653Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65925816"
Jan 23 00:07:45.799540 containerd[2014]: time="2026-01-23T00:07:45.799098389Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:45.803897 containerd[2014]: time="2026-01-23T00:07:45.803847389Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 23 00:07:45.805391 containerd[2014]: time="2026-01-23T00:07:45.805331119Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.921007614s"
Jan 23 00:07:45.805391 containerd[2014]: time="2026-01-23T00:07:45.805387719Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 23 00:07:45.815529 containerd[2014]: time="2026-01-23T00:07:45.815026913Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 23 00:07:45.833830 containerd[2014]: time="2026-01-23T00:07:45.833756491Z" level=info msg="Container 96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:07:45.851765 containerd[2014]: time="2026-01-23T00:07:45.851703389Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975\""
Jan 23 00:07:45.855281 containerd[2014]: time="2026-01-23T00:07:45.853809224Z" level=info msg="StartContainer for \"96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975\""
Jan 23 00:07:45.858239 containerd[2014]: time="2026-01-23T00:07:45.858188413Z" level=info msg="connecting to shim 96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975" address="unix:///run/containerd/s/e9e957c1bdf455fcf0b81bf2f527da7904927a065b02c44ff48a9ac2c2464140" protocol=ttrpc version=3
Jan 23 00:07:45.905915 systemd[1]: Started cri-containerd-96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975.scope - libcontainer container 96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975.
Jan 23 00:07:46.041090 containerd[2014]: time="2026-01-23T00:07:46.040972872Z" level=info msg="StartContainer for \"96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975\" returns successfully"
Jan 23 00:07:46.631455 kubelet[3616]: E0123 00:07:46.630822 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:07:47.353073 systemd[1]: cri-containerd-96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975.scope: Deactivated successfully.
Jan 23 00:07:47.355767 systemd[1]: cri-containerd-96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975.scope: Consumed 981ms CPU time, 185.3M memory peak, 165.9M written to disk.
Jan 23 00:07:47.361736 containerd[2014]: time="2026-01-23T00:07:47.361645889Z" level=info msg="received container exit event container_id:\"96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975\" id:\"96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975\" pid:4401 exited_at:{seconds:1769126867 nanos:361272228}"
Jan 23 00:07:47.411025 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-96c231984cc2ac1ada2d22e47f1551e79223d745d8de887f6b4b2d6f98955975-rootfs.mount: Deactivated successfully.
Jan 23 00:07:47.418219 kubelet[3616]: I0123 00:07:47.416462 3616 kubelet_node_status.go:439] "Fast updating node status as it just became ready"
Jan 23 00:07:47.548634 systemd[1]: Created slice kubepods-burstable-pod383114fa_f156_4be9_85ad_d3c3beab9901.slice - libcontainer container kubepods-burstable-pod383114fa_f156_4be9_85ad_d3c3beab9901.slice.
Jan 23 00:07:47.580634 kubelet[3616]: I0123 00:07:47.580166 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6f6cm\" (UniqueName: \"kubernetes.io/projected/383114fa-f156-4be9-85ad-d3c3beab9901-kube-api-access-6f6cm\") pod \"coredns-66bc5c9577-6jc79\" (UID: \"383114fa-f156-4be9-85ad-d3c3beab9901\") " pod="kube-system/coredns-66bc5c9577-6jc79"
Jan 23 00:07:47.580634 kubelet[3616]: I0123 00:07:47.580318 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/383114fa-f156-4be9-85ad-d3c3beab9901-config-volume\") pod \"coredns-66bc5c9577-6jc79\" (UID: \"383114fa-f156-4be9-85ad-d3c3beab9901\") " pod="kube-system/coredns-66bc5c9577-6jc79"
Jan 23 00:07:47.585200 systemd[1]: Created slice kubepods-burstable-pod3a923c28_62e4_468e_8af7_41e647711ef9.slice - libcontainer container kubepods-burstable-pod3a923c28_62e4_468e_8af7_41e647711ef9.slice.
Jan 23 00:07:47.630222 systemd[1]: Created slice kubepods-besteffort-pod6e08ee65_394c_47ae_9b9c_08be18fa8e62.slice - libcontainer container kubepods-besteffort-pod6e08ee65_394c_47ae_9b9c_08be18fa8e62.slice.
Jan 23 00:07:47.666055 systemd[1]: Created slice kubepods-besteffort-pod7325a6f4_e6b9_4cb1_9e21_13aa088be606.slice - libcontainer container kubepods-besteffort-pod7325a6f4_e6b9_4cb1_9e21_13aa088be606.slice.
Jan 23 00:07:47.681523 kubelet[3616]: I0123 00:07:47.681298 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/7325a6f4-e6b9-4cb1-9e21-13aa088be606-calico-apiserver-certs\") pod \"calico-apiserver-866c48949f-zlhcq\" (UID: \"7325a6f4-e6b9-4cb1-9e21-13aa088be606\") " pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq"
Jan 23 00:07:47.682660 kubelet[3616]: I0123 00:07:47.682311 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6e08ee65-394c-47ae-9b9c-08be18fa8e62-config\") pod \"goldmane-7c778bb748-h2nff\" (UID: \"6e08ee65-394c-47ae-9b9c-08be18fa8e62\") " pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:47.682660 kubelet[3616]: I0123 00:07:47.682419 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zxnqb\" (UniqueName: \"kubernetes.io/projected/3a923c28-62e4-468e-8af7-41e647711ef9-kube-api-access-zxnqb\") pod \"coredns-66bc5c9577-bm8wz\" (UID: \"3a923c28-62e4-468e-8af7-41e647711ef9\") " pod="kube-system/coredns-66bc5c9577-bm8wz"
Jan 23 00:07:47.682660 kubelet[3616]: I0123 00:07:47.682463 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6e08ee65-394c-47ae-9b9c-08be18fa8e62-goldmane-ca-bundle\") pod \"goldmane-7c778bb748-h2nff\" (UID: \"6e08ee65-394c-47ae-9b9c-08be18fa8e62\") " pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:47.682660 kubelet[3616]: I0123 00:07:47.682570 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dx5nn\" (UniqueName: \"kubernetes.io/projected/7325a6f4-e6b9-4cb1-9e21-13aa088be606-kube-api-access-dx5nn\") pod \"calico-apiserver-866c48949f-zlhcq\" (UID: \"7325a6f4-e6b9-4cb1-9e21-13aa088be606\") " pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq"
Jan 23 00:07:47.682660 kubelet[3616]: I0123 00:07:47.682640 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/6e08ee65-394c-47ae-9b9c-08be18fa8e62-goldmane-key-pair\") pod \"goldmane-7c778bb748-h2nff\" (UID: \"6e08ee65-394c-47ae-9b9c-08be18fa8e62\") " pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:47.683058 kubelet[3616]: I0123 00:07:47.682690 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3a923c28-62e4-468e-8af7-41e647711ef9-config-volume\") pod \"coredns-66bc5c9577-bm8wz\" (UID: \"3a923c28-62e4-468e-8af7-41e647711ef9\") " pod="kube-system/coredns-66bc5c9577-bm8wz"
Jan 23 00:07:47.683058 kubelet[3616]: I0123 00:07:47.682738 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-77rgh\" (UniqueName: \"kubernetes.io/projected/6e08ee65-394c-47ae-9b9c-08be18fa8e62-kube-api-access-77rgh\") pod \"goldmane-7c778bb748-h2nff\" (UID: \"6e08ee65-394c-47ae-9b9c-08be18fa8e62\") " pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:47.709468 systemd[1]: Created slice kubepods-besteffort-pod55e6514e_0b16_4f79_a408_29192627f17a.slice - libcontainer container kubepods-besteffort-pod55e6514e_0b16_4f79_a408_29192627f17a.slice.
Jan 23 00:07:47.783090 systemd[1]: Created slice kubepods-besteffort-pod9783fed8_ce36_4bde_9a81_2ed0b850cd1e.slice - libcontainer container kubepods-besteffort-pod9783fed8_ce36_4bde_9a81_2ed0b850cd1e.slice.
Jan 23 00:07:47.793032 kubelet[3616]: I0123 00:07:47.784083 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9783fed8-ce36-4bde-9a81-2ed0b850cd1e-calico-apiserver-certs\") pod \"calico-apiserver-866c48949f-lh2s6\" (UID: \"9783fed8-ce36-4bde-9a81-2ed0b850cd1e\") " pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6"
Jan 23 00:07:47.793032 kubelet[3616]: I0123 00:07:47.784182 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55e6514e-0b16-4f79-a408-29192627f17a-whisker-backend-key-pair\") pod \"whisker-55d68664fb-jtx2x\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " pod="calico-system/whisker-55d68664fb-jtx2x"
Jan 23 00:07:47.793032 kubelet[3616]: I0123 00:07:47.784223 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55e6514e-0b16-4f79-a408-29192627f17a-whisker-ca-bundle\") pod \"whisker-55d68664fb-jtx2x\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " pod="calico-system/whisker-55d68664fb-jtx2x"
Jan 23 00:07:47.793032 kubelet[3616]: I0123 00:07:47.784265 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-79rfs\" (UniqueName: \"kubernetes.io/projected/55e6514e-0b16-4f79-a408-29192627f17a-kube-api-access-79rfs\") pod \"whisker-55d68664fb-jtx2x\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " pod="calico-system/whisker-55d68664fb-jtx2x"
Jan 23 00:07:47.793032 kubelet[3616]: I0123 00:07:47.784309 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4r2nl\" (UniqueName: \"kubernetes.io/projected/9783fed8-ce36-4bde-9a81-2ed0b850cd1e-kube-api-access-4r2nl\") pod \"calico-apiserver-866c48949f-lh2s6\" (UID: \"9783fed8-ce36-4bde-9a81-2ed0b850cd1e\") " pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6"
Jan 23 00:07:47.885621 kubelet[3616]: I0123 00:07:47.884951 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fb41eab3-a03e-4b48-bc83-fecd2d987e90-tigera-ca-bundle\") pod \"calico-kube-controllers-6f7974d7c8-hppng\" (UID: \"fb41eab3-a03e-4b48-bc83-fecd2d987e90\") " pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng"
Jan 23 00:07:47.885621 kubelet[3616]: I0123 00:07:47.885125 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g7h6l\" (UniqueName: \"kubernetes.io/projected/fb41eab3-a03e-4b48-bc83-fecd2d987e90-kube-api-access-g7h6l\") pod \"calico-kube-controllers-6f7974d7c8-hppng\" (UID: \"fb41eab3-a03e-4b48-bc83-fecd2d987e90\") " pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng"
Jan 23 00:07:47.932296 systemd[1]: Created slice kubepods-besteffort-podfb41eab3_a03e_4b48_bc83_fecd2d987e90.slice - libcontainer container kubepods-besteffort-podfb41eab3_a03e_4b48_bc83_fecd2d987e90.slice.
Jan 23 00:07:47.940409 containerd[2014]: time="2026-01-23T00:07:47.940329716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6jc79,Uid:383114fa-f156-4be9-85ad-d3c3beab9901,Namespace:kube-system,Attempt:0,}"
Jan 23 00:07:47.980630 containerd[2014]: time="2026-01-23T00:07:47.980237640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bm8wz,Uid:3a923c28-62e4-468e-8af7-41e647711ef9,Namespace:kube-system,Attempt:0,}"
Jan 23 00:07:48.038417 containerd[2014]: time="2026-01-23T00:07:48.038350813Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-zlhcq,Uid:7325a6f4-e6b9-4cb1-9e21-13aa088be606,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 00:07:48.092476 containerd[2014]: time="2026-01-23T00:07:48.092404989Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d68664fb-jtx2x,Uid:55e6514e-0b16-4f79-a408-29192627f17a,Namespace:calico-system,Attempt:0,}"
Jan 23 00:07:48.213389 containerd[2014]: time="2026-01-23T00:07:48.213011112Z" level=error msg="Failed to destroy network for sandbox \"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.223397 containerd[2014]: time="2026-01-23T00:07:48.222999811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-lh2s6,Uid:9783fed8-ce36-4bde-9a81-2ed0b850cd1e,Namespace:calico-apiserver,Attempt:0,}"
Jan 23 00:07:48.247116 containerd[2014]: time="2026-01-23T00:07:48.247019907Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6jc79,Uid:383114fa-f156-4be9-85ad-d3c3beab9901,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.248060 kubelet[3616]: E0123 00:07:48.247823 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.248060 kubelet[3616]: E0123 00:07:48.247943 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6jc79"
Jan 23 00:07:48.248060 kubelet[3616]: E0123 00:07:48.248004 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-6jc79"
Jan 23 00:07:48.248960 kubelet[3616]: E0123 00:07:48.248546 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-6jc79_kube-system(383114fa-f156-4be9-85ad-d3c3beab9901)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-6jc79_kube-system(383114fa-f156-4be9-85ad-d3c3beab9901)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"67f1ed696d51ef81349ebb915c15b78e95ddf945b7e043ff492426f84d94f3e6\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-6jc79" podUID="383114fa-f156-4be9-85ad-d3c3beab9901"
Jan 23 00:07:48.259181 containerd[2014]: time="2026-01-23T00:07:48.258589332Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7974d7c8-hppng,Uid:fb41eab3-a03e-4b48-bc83-fecd2d987e90,Namespace:calico-system,Attempt:0,}"
Jan 23 00:07:48.261651 containerd[2014]: time="2026-01-23T00:07:48.261575910Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2nff,Uid:6e08ee65-394c-47ae-9b9c-08be18fa8e62,Namespace:calico-system,Attempt:0,}"
Jan 23 00:07:48.533735 containerd[2014]: time="2026-01-23T00:07:48.533352130Z" level=error msg="Failed to destroy network for sandbox \"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.547945 containerd[2014]: time="2026-01-23T00:07:48.547445416Z" level=error msg="Failed to destroy network for sandbox \"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.548565 systemd[1]: run-netns-cni\x2d73fad5f7\x2dfda6\x2d4b60\x2d64a0\x2d2a3362d85f68.mount: Deactivated successfully.
Jan 23 00:07:48.557534 containerd[2014]: time="2026-01-23T00:07:48.553397595Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-55d68664fb-jtx2x,Uid:55e6514e-0b16-4f79-a408-29192627f17a,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.559406 containerd[2014]: time="2026-01-23T00:07:48.559204058Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bm8wz,Uid:3a923c28-62e4-468e-8af7-41e647711ef9,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.560586 kubelet[3616]: E0123 00:07:48.559781 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.560586 kubelet[3616]: E0123 00:07:48.559856 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bm8wz"
Jan 23 00:07:48.560586 kubelet[3616]: E0123 00:07:48.559889 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-66bc5c9577-bm8wz"
Jan 23 00:07:48.564431 kubelet[3616]: E0123 00:07:48.559986 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-66bc5c9577-bm8wz_kube-system(3a923c28-62e4-468e-8af7-41e647711ef9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-66bc5c9577-bm8wz_kube-system(3a923c28-62e4-468e-8af7-41e647711ef9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"17d86ac74ec6de9db2f26f1f47a35b959d4e5579894fe41b6c317f577e80c498\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-66bc5c9577-bm8wz" podUID="3a923c28-62e4-468e-8af7-41e647711ef9"
Jan 23 00:07:48.564431 kubelet[3616]: E0123 00:07:48.561956 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.564431 kubelet[3616]: E0123 00:07:48.562033 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55d68664fb-jtx2x"
Jan 23 00:07:48.563428 systemd[1]: run-netns-cni\x2de6f84739\x2d91be\x2dab29\x2d051b\x2da29044a8107f.mount: Deactivated successfully.
Jan 23 00:07:48.564926 kubelet[3616]: E0123 00:07:48.562067 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-55d68664fb-jtx2x"
Jan 23 00:07:48.564926 kubelet[3616]: E0123 00:07:48.562155 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-55d68664fb-jtx2x_calico-system(55e6514e-0b16-4f79-a408-29192627f17a)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-55d68664fb-jtx2x_calico-system(55e6514e-0b16-4f79-a408-29192627f17a)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"9593e52177a1c63a85ebf27fa176ad54589107c94f31bd6c96aa2749020ff52f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-55d68664fb-jtx2x" podUID="55e6514e-0b16-4f79-a408-29192627f17a"
Jan 23 00:07:48.648917 systemd[1]: Created slice kubepods-besteffort-pod73991cb4_51f1_4920_a4d2_a782912c4922.slice - libcontainer container kubepods-besteffort-pod73991cb4_51f1_4920_a4d2_a782912c4922.slice.
Jan 23 00:07:48.659791 containerd[2014]: time="2026-01-23T00:07:48.659701916Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5rlp,Uid:73991cb4-51f1-4920-a4d2-a782912c4922,Namespace:calico-system,Attempt:0,}"
Jan 23 00:07:48.669721 containerd[2014]: time="2026-01-23T00:07:48.669607665Z" level=error msg="Failed to destroy network for sandbox \"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.679528 containerd[2014]: time="2026-01-23T00:07:48.676632443Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7974d7c8-hppng,Uid:fb41eab3-a03e-4b48-bc83-fecd2d987e90,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.679784 kubelet[3616]: E0123 00:07:48.677452 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.679784 kubelet[3616]: E0123 00:07:48.679170 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng"
Jan 23 00:07:48.679784 kubelet[3616]: E0123 00:07:48.679246 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng"
Jan 23 00:07:48.686749 kubelet[3616]: E0123 00:07:48.681705 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"e9504bb1d4c578b01c6f687be9cdd2aced094c1569c61f9f31bdc13e7183afe0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:07:48.692920 systemd[1]: run-netns-cni\x2d9d01dd4b\x2d2a4e\x2d126e\x2d842c\x2db2b2b635b1f7.mount: Deactivated successfully.
Jan 23 00:07:48.708174 containerd[2014]: time="2026-01-23T00:07:48.708062372Z" level=error msg="Failed to destroy network for sandbox \"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.708874 containerd[2014]: time="2026-01-23T00:07:48.708796225Z" level=error msg="Failed to destroy network for sandbox \"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.710621 containerd[2014]: time="2026-01-23T00:07:48.710450066Z" level=error msg="Failed to destroy network for sandbox \"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.710887 containerd[2014]: time="2026-01-23T00:07:48.710816374Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-zlhcq,Uid:7325a6f4-e6b9-4cb1-9e21-13aa088be606,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.711724 kubelet[3616]: E0123 00:07:48.711657 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.711973 kubelet[3616]: E0123 00:07:48.711743 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq"
Jan 23 00:07:48.711973 kubelet[3616]: E0123 00:07:48.711777 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq"
Jan 23 00:07:48.711973 kubelet[3616]: E0123 00:07:48.711873 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"503974c6ad236a2ef6a3d89ccf9d6a2d199a744c059f75698149c22a212c302a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:07:48.715336 containerd[2014]: time="2026-01-23T00:07:48.714616505Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2nff,Uid:6e08ee65-394c-47ae-9b9c-08be18fa8e62,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.715590 kubelet[3616]: E0123 00:07:48.714935 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.715590 kubelet[3616]: E0123 00:07:48.715011 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:48.715590 kubelet[3616]: E0123 00:07:48.715244 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-7c778bb748-h2nff"
Jan 23 00:07:48.715961 kubelet[3616]: E0123 00:07:48.715361 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7ef6dcf88556de3fb056c0ebcf5208903e118f4a2f931d25474983a4edab0c3d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:07:48.716381 containerd[2014]: time="2026-01-23T00:07:48.715809752Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-lh2s6,Uid:9783fed8-ce36-4bde-9a81-2ed0b850cd1e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.716863 kubelet[3616]: E0123 00:07:48.716581 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 23 00:07:48.716863 kubelet[3616]: E0123 00:07:48.716672 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6"
Jan 23 00:07:48.716863 kubelet[3616]: E0123 00:07:48.716716 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6"
Jan 23 00:07:48.717156 kubelet[3616]: E0123 00:07:48.716799 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"609dcf4b8a55878222a49c5f35e84779fb2d9c09c0d7a60f18248cfb902c992d\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e"
Jan 23 00:07:48.798841 containerd[2014]: time="2026-01-23T00:07:48.798401096Z" level=error msg="Failed to destroy network for sandbox
\"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:07:48.800379 containerd[2014]: time="2026-01-23T00:07:48.800227878Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5rlp,Uid:73991cb4-51f1-4920-a4d2-a782912c4922,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:07:48.800712 kubelet[3616]: E0123 00:07:48.800581 3616 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 23 00:07:48.800712 kubelet[3616]: E0123 00:07:48.800667 3616 kuberuntime_sandbox.go:71] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:48.800915 kubelet[3616]: E0123 00:07:48.800703 3616 kuberuntime_manager.go:1343] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-d5rlp" Jan 23 00:07:48.800915 kubelet[3616]: E0123 00:07:48.800789 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c0a2e1c994554fb8c13a32f4c7c443021c4384f607bdb39c0a5703f6ff3c1340\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:07:48.978327 containerd[2014]: time="2026-01-23T00:07:48.978259966Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 23 00:07:49.407828 systemd[1]: run-netns-cni\x2d9a3c1d8b\x2d39ae\x2db71a\x2d0991\x2d17689f309364.mount: Deactivated successfully. Jan 23 00:07:49.408004 systemd[1]: run-netns-cni\x2da320f795\x2d7d78\x2dd330\x2d750e\x2dc07b82fda551.mount: Deactivated successfully. Jan 23 00:07:49.408124 systemd[1]: run-netns-cni\x2dccde8b19\x2d6fe1\x2dc2d4\x2d070a\x2d300005bee0bb.mount: Deactivated successfully. Jan 23 00:07:49.408242 systemd[1]: run-netns-cni\x2d8493516d\x2d0949\x2d09ef\x2df41d\x2dd388b7a4d6c9.mount: Deactivated successfully. Jan 23 00:07:54.947834 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1653701106.mount: Deactivated successfully. 
Jan 23 00:07:55.029907 containerd[2014]: time="2026-01-23T00:07:55.029792720Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:55.031547 containerd[2014]: time="2026-01-23T00:07:55.031459251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150934562" Jan 23 00:07:55.034201 containerd[2014]: time="2026-01-23T00:07:55.033847424Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:55.039085 containerd[2014]: time="2026-01-23T00:07:55.039028640Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 23 00:07:55.040296 containerd[2014]: time="2026-01-23T00:07:55.040053408Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 6.06172186s" Jan 23 00:07:55.040296 containerd[2014]: time="2026-01-23T00:07:55.040114181Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 23 00:07:55.087841 containerd[2014]: time="2026-01-23T00:07:55.087767926Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 23 00:07:55.109528 containerd[2014]: time="2026-01-23T00:07:55.105085931Z" level=info msg="Container 
4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:07:55.114955 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4091963.mount: Deactivated successfully. Jan 23 00:07:55.132990 containerd[2014]: time="2026-01-23T00:07:55.132808515Z" level=info msg="CreateContainer within sandbox \"aaa6666247f8792d93a6df8af1bf879f90b4730fe2ccafa38e2a9e94b56241af\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803\"" Jan 23 00:07:55.133888 containerd[2014]: time="2026-01-23T00:07:55.133774644Z" level=info msg="StartContainer for \"4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803\"" Jan 23 00:07:55.137739 containerd[2014]: time="2026-01-23T00:07:55.137592994Z" level=info msg="connecting to shim 4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803" address="unix:///run/containerd/s/e9e957c1bdf455fcf0b81bf2f527da7904927a065b02c44ff48a9ac2c2464140" protocol=ttrpc version=3 Jan 23 00:07:55.212163 systemd[1]: Started cri-containerd-4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803.scope - libcontainer container 4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803. Jan 23 00:07:55.330850 containerd[2014]: time="2026-01-23T00:07:55.330802385Z" level=info msg="StartContainer for \"4d1a821c04b4edb148035440fe4f048885e5bec6244b1295138e0f9ad89ea803\" returns successfully" Jan 23 00:07:55.698956 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 23 00:07:55.699131 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld. All Rights Reserved. 
Jan 23 00:07:55.962215 kubelet[3616]: I0123 00:07:55.962049 3616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55e6514e-0b16-4f79-a408-29192627f17a-whisker-ca-bundle\") pod \"55e6514e-0b16-4f79-a408-29192627f17a\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " Jan 23 00:07:55.962215 kubelet[3616]: I0123 00:07:55.962126 3616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55e6514e-0b16-4f79-a408-29192627f17a-whisker-backend-key-pair\") pod \"55e6514e-0b16-4f79-a408-29192627f17a\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " Jan 23 00:07:55.965175 kubelet[3616]: I0123 00:07:55.962225 3616 reconciler_common.go:163] "operationExecutor.UnmountVolume started for volume \"kube-api-access-79rfs\" (UniqueName: \"kubernetes.io/projected/55e6514e-0b16-4f79-a408-29192627f17a-kube-api-access-79rfs\") pod \"55e6514e-0b16-4f79-a408-29192627f17a\" (UID: \"55e6514e-0b16-4f79-a408-29192627f17a\") " Jan 23 00:07:55.965175 kubelet[3616]: I0123 00:07:55.964336 3616 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/55e6514e-0b16-4f79-a408-29192627f17a-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "55e6514e-0b16-4f79-a408-29192627f17a" (UID: "55e6514e-0b16-4f79-a408-29192627f17a"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 00:07:55.978558 kubelet[3616]: I0123 00:07:55.977408 3616 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/55e6514e-0b16-4f79-a408-29192627f17a-kube-api-access-79rfs" (OuterVolumeSpecName: "kube-api-access-79rfs") pod "55e6514e-0b16-4f79-a408-29192627f17a" (UID: "55e6514e-0b16-4f79-a408-29192627f17a"). InnerVolumeSpecName "kube-api-access-79rfs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 00:07:55.977803 systemd[1]: var-lib-kubelet-pods-55e6514e\x2d0b16\x2d4f79\x2da408\x2d29192627f17a-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d79rfs.mount: Deactivated successfully. Jan 23 00:07:55.987699 kubelet[3616]: I0123 00:07:55.987615 3616 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/55e6514e-0b16-4f79-a408-29192627f17a-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "55e6514e-0b16-4f79-a408-29192627f17a" (UID: "55e6514e-0b16-4f79-a408-29192627f17a"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 00:07:55.988955 systemd[1]: var-lib-kubelet-pods-55e6514e\x2d0b16\x2d4f79\x2da408\x2d29192627f17a-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 23 00:07:56.048568 systemd[1]: Removed slice kubepods-besteffort-pod55e6514e_0b16_4f79_a408_29192627f17a.slice - libcontainer container kubepods-besteffort-pod55e6514e_0b16_4f79_a408_29192627f17a.slice. 
Jan 23 00:07:56.064692 kubelet[3616]: I0123 00:07:56.063890 3616 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/55e6514e-0b16-4f79-a408-29192627f17a-whisker-ca-bundle\") on node \"ip-172-31-18-130\" DevicePath \"\"" Jan 23 00:07:56.064692 kubelet[3616]: I0123 00:07:56.063938 3616 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/55e6514e-0b16-4f79-a408-29192627f17a-whisker-backend-key-pair\") on node \"ip-172-31-18-130\" DevicePath \"\"" Jan 23 00:07:56.064692 kubelet[3616]: I0123 00:07:56.063962 3616 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-79rfs\" (UniqueName: \"kubernetes.io/projected/55e6514e-0b16-4f79-a408-29192627f17a-kube-api-access-79rfs\") on node \"ip-172-31-18-130\" DevicePath \"\"" Jan 23 00:07:56.119343 kubelet[3616]: I0123 00:07:56.116953 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-zxwhw" podStartSLOduration=1.8392433449999999 podStartE2EDuration="18.11693199s" podCreationTimestamp="2026-01-23 00:07:38 +0000 UTC" firstStartedPulling="2026-01-23 00:07:38.764369656 +0000 UTC m=+31.390606951" lastFinishedPulling="2026-01-23 00:07:55.042058301 +0000 UTC m=+47.668295596" observedRunningTime="2026-01-23 00:07:56.080350986 +0000 UTC m=+48.706588293" watchObservedRunningTime="2026-01-23 00:07:56.11693199 +0000 UTC m=+48.743169285" Jan 23 00:07:56.230289 systemd[1]: Created slice kubepods-besteffort-pod0b4bad72_057e_4231_8c95_8f0d608e570d.slice - libcontainer container kubepods-besteffort-pod0b4bad72_057e_4231_8c95_8f0d608e570d.slice. 
Jan 23 00:07:56.265219 kubelet[3616]: I0123 00:07:56.265146 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/0b4bad72-057e-4231-8c95-8f0d608e570d-whisker-backend-key-pair\") pod \"whisker-bd87c786-pqgwc\" (UID: \"0b4bad72-057e-4231-8c95-8f0d608e570d\") " pod="calico-system/whisker-bd87c786-pqgwc" Jan 23 00:07:56.265386 kubelet[3616]: I0123 00:07:56.265226 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/0b4bad72-057e-4231-8c95-8f0d608e570d-whisker-ca-bundle\") pod \"whisker-bd87c786-pqgwc\" (UID: \"0b4bad72-057e-4231-8c95-8f0d608e570d\") " pod="calico-system/whisker-bd87c786-pqgwc" Jan 23 00:07:56.265386 kubelet[3616]: I0123 00:07:56.265266 3616 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zw59k\" (UniqueName: \"kubernetes.io/projected/0b4bad72-057e-4231-8c95-8f0d608e570d-kube-api-access-zw59k\") pod \"whisker-bd87c786-pqgwc\" (UID: \"0b4bad72-057e-4231-8c95-8f0d608e570d\") " pod="calico-system/whisker-bd87c786-pqgwc" Jan 23 00:07:56.546533 containerd[2014]: time="2026-01-23T00:07:56.546236352Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd87c786-pqgwc,Uid:0b4bad72-057e-4231-8c95-8f0d608e570d,Namespace:calico-system,Attempt:0,}" Jan 23 00:07:57.640911 kubelet[3616]: I0123 00:07:57.640858 3616 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="55e6514e-0b16-4f79-a408-29192627f17a" path="/var/lib/kubelet/pods/55e6514e-0b16-4f79-a408-29192627f17a/volumes" Jan 23 00:07:58.195849 (udev-worker)[4692]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 00:07:58.200812 systemd-networkd[1822]: cali81c8552bdf2: Link UP Jan 23 00:07:58.204165 systemd-networkd[1822]: cali81c8552bdf2: Gained carrier Jan 23 00:07:58.385437 containerd[2014]: 2026-01-23 00:07:56.666 [INFO][4734] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 23 00:07:58.385437 containerd[2014]: 2026-01-23 00:07:57.856 [INFO][4734] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0 whisker-bd87c786- calico-system 0b4bad72-057e-4231-8c95-8f0d608e570d 928 0 2026-01-23 00:07:56 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:bd87c786 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ip-172-31-18-130 whisker-bd87c786-pqgwc eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali81c8552bdf2 [] [] }} ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-" Jan 23 00:07:58.385437 containerd[2014]: 2026-01-23 00:07:57.856 [INFO][4734] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.385437 containerd[2014]: 2026-01-23 00:07:57.951 [INFO][4864] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" HandleID="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Workload="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.952 [INFO][4864] 
ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" HandleID="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Workload="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331640), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"whisker-bd87c786-pqgwc", "timestamp":"2026-01-23 00:07:57.951754313 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.952 [INFO][4864] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.952 [INFO][4864] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.952 [INFO][4864] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.974 [INFO][4864] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" host="ip-172-31-18-130" Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:57.990 [INFO][4864] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:58.005 [INFO][4864] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:58.011 [INFO][4864] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:07:58.386216 containerd[2014]: 2026-01-23 00:07:58.015 [INFO][4864] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.015 [INFO][4864] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" host="ip-172-31-18-130" Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.022 [INFO][4864] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.032 [INFO][4864] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" host="ip-172-31-18-130" Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.044 [INFO][4864] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.65/26] block=192.168.44.64/26 
handle="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" host="ip-172-31-18-130" Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.044 [INFO][4864] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.65/26] handle="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" host="ip-172-31-18-130" Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.045 [INFO][4864] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:07:58.390016 containerd[2014]: 2026-01-23 00:07:58.045 [INFO][4864] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.65/26] IPv6=[] ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" HandleID="k8s-pod-network.70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Workload="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.390376 containerd[2014]: 2026-01-23 00:07:58.061 [INFO][4734] cni-plugin/k8s.go 418: Populated endpoint ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0", GenerateName:"whisker-bd87c786-", Namespace:"calico-system", SelfLink:"", UID:"0b4bad72-057e-4231-8c95-8f0d608e570d", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd87c786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, 
Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"whisker-bd87c786-pqgwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81c8552bdf2", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:07:58.390376 containerd[2014]: 2026-01-23 00:07:58.061 [INFO][4734] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.65/32] ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.392826 containerd[2014]: 2026-01-23 00:07:58.061 [INFO][4734] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali81c8552bdf2 ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.392826 containerd[2014]: 2026-01-23 00:07:58.250 [INFO][4734] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.392965 containerd[2014]: 2026-01-23 00:07:58.251 [INFO][4734] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" 
Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0", GenerateName:"whisker-bd87c786-", Namespace:"calico-system", SelfLink:"", UID:"0b4bad72-057e-4231-8c95-8f0d608e570d", ResourceVersion:"928", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 56, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"bd87c786", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af", Pod:"whisker-bd87c786-pqgwc", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.44.65/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali81c8552bdf2", MAC:"ce:50:e7:60:39:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:07:58.393108 containerd[2014]: 2026-01-23 00:07:58.378 [INFO][4734] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" Namespace="calico-system" Pod="whisker-bd87c786-pqgwc" WorkloadEndpoint="ip--172--31--18--130-k8s-whisker--bd87c786--pqgwc-eth0" Jan 23 00:07:58.478809 containerd[2014]: 
time="2026-01-23T00:07:58.478457931Z" level=info msg="connecting to shim 70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af" address="unix:///run/containerd/s/01a7c878e8dca33448483e5d8d168fa3946bef88add6e72e1f1d899634726234" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:07:58.575918 systemd[1]: Started cri-containerd-70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af.scope - libcontainer container 70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af. Jan 23 00:07:58.779056 containerd[2014]: time="2026-01-23T00:07:58.778832934Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-bd87c786-pqgwc,Uid:0b4bad72-057e-4231-8c95-8f0d608e570d,Namespace:calico-system,Attempt:0,} returns sandbox id \"70bb580763040159f62bed8509169da56e56bb912df8f834fc0b541bec7b32af\"" Jan 23 00:07:58.785294 containerd[2014]: time="2026-01-23T00:07:58.785205610Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:07:59.092520 containerd[2014]: time="2026-01-23T00:07:59.092421462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:07:59.094767 containerd[2014]: time="2026-01-23T00:07:59.094662085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:07:59.094767 containerd[2014]: time="2026-01-23T00:07:59.094728844Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:07:59.095436 kubelet[3616]: E0123 00:07:59.095064 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:07:59.095436 kubelet[3616]: E0123 00:07:59.095152 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:07:59.102955 kubelet[3616]: E0123 00:07:59.102862 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:07:59.107317 containerd[2014]: time="2026-01-23T00:07:59.107270814Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:07:59.225010 systemd-networkd[1822]: vxlan.calico: Link UP Jan 23 00:07:59.225024 systemd-networkd[1822]: vxlan.calico: Gained carrier Jan 23 00:07:59.293871 (udev-worker)[4691]: Network interface NamePolicy= disabled on kernel command line. 
Jan 23 00:07:59.390551 containerd[2014]: time="2026-01-23T00:07:59.390044417Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:07:59.392246 containerd[2014]: time="2026-01-23T00:07:59.391591343Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:07:59.392246 containerd[2014]: time="2026-01-23T00:07:59.391715181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:07:59.392386 kubelet[3616]: E0123 00:07:59.391955 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:07:59.392386 kubelet[3616]: E0123 00:07:59.392015 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:07:59.392386 kubelet[3616]: E0123 00:07:59.392123 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:07:59.393382 kubelet[3616]: E0123 00:07:59.392193 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:07:59.635631 containerd[2014]: time="2026-01-23T00:07:59.635532704Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bm8wz,Uid:3a923c28-62e4-468e-8af7-41e647711ef9,Namespace:kube-system,Attempt:0,}" Jan 23 00:07:59.639552 containerd[2014]: time="2026-01-23T00:07:59.638730137Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-lh2s6,Uid:9783fed8-ce36-4bde-9a81-2ed0b850cd1e,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:07:59.641595 containerd[2014]: time="2026-01-23T00:07:59.641363047Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2nff,Uid:6e08ee65-394c-47ae-9b9c-08be18fa8e62,Namespace:calico-system,Attempt:0,}" Jan 23 00:07:59.905784 systemd-networkd[1822]: cali81c8552bdf2: Gained IPv6LL Jan 23 00:08:00.047676 kubelet[3616]: E0123 00:08:00.047562 3616 pod_workers.go:1324] "Error syncing pod, 
skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:08:00.158845 systemd-networkd[1822]: cali196c20e7785: Link UP Jan 23 00:08:00.161989 systemd-networkd[1822]: cali196c20e7785: Gained carrier Jan 23 00:08:00.215446 containerd[2014]: 2026-01-23 00:07:59.838 [INFO][5002] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0 coredns-66bc5c9577- kube-system 3a923c28-62e4-468e-8af7-41e647711ef9 863 0 2026-01-23 00:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-130 coredns-66bc5c9577-bm8wz eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali196c20e7785 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" 
Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-" Jan 23 00:08:00.215446 containerd[2014]: 2026-01-23 00:07:59.838 [INFO][5002] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.215446 containerd[2014]: 2026-01-23 00:07:59.949 [INFO][5054] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" HandleID="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:07:59.954 [INFO][5054] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" HandleID="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002b4800), Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-130", "pod":"coredns-66bc5c9577-bm8wz", "timestamp":"2026-01-23 00:07:59.949204267 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:07:59.954 [INFO][5054] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:07:59.955 [INFO][5054] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:07:59.955 [INFO][5054] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:08:00.001 [INFO][5054] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" host="ip-172-31-18-130" Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:08:00.024 [INFO][5054] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:08:00.057 [INFO][5054] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:08:00.075 [INFO][5054] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.215921 containerd[2014]: 2026-01-23 00:08:00.095 [INFO][5054] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.095 [INFO][5054] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" host="ip-172-31-18-130" Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.102 [INFO][5054] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9 Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.132 [INFO][5054] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" host="ip-172-31-18-130" Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5054] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.66/26] block=192.168.44.64/26 
handle="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" host="ip-172-31-18-130" Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5054] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.66/26] handle="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" host="ip-172-31-18-130" Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5054] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:08:00.218455 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5054] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.66/26] IPv6=[] ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" HandleID="k8s-pod-network.f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.150 [INFO][5002] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3a923c28-62e4-468e-8af7-41e647711ef9", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), 
OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"coredns-66bc5c9577-bm8wz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali196c20e7785", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.150 [INFO][5002] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.66/32] ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.150 [INFO][5002] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali196c20e7785 ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" 
WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.164 [INFO][5002] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.167 [INFO][5002] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"3a923c28-62e4-468e-8af7-41e647711ef9", ResourceVersion:"863", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9", Pod:"coredns-66bc5c9577-bm8wz", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.66/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", 
IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali196c20e7785", MAC:"02:6c:06:18:68:25", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:00.220196 containerd[2014]: 2026-01-23 00:08:00.191 [INFO][5002] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" Namespace="kube-system" Pod="coredns-66bc5c9577-bm8wz" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--bm8wz-eth0" Jan 23 00:08:00.279849 containerd[2014]: time="2026-01-23T00:08:00.279694476Z" level=info msg="connecting to shim f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9" address="unix:///run/containerd/s/1ebea3728896c1c50a37c40e55b0e76f79b3cbe66b93baa4b6ec67fb16cd4c92" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:00.337537 systemd-networkd[1822]: cali270c8aa9270: Link UP Jan 23 00:08:00.346053 systemd-networkd[1822]: cali270c8aa9270: Gained carrier Jan 23 00:08:00.385966 systemd[1]: Started cri-containerd-f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9.scope - libcontainer container 
f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9. Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:07:59.912 [INFO][5009] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0 goldmane-7c778bb748- calico-system 6e08ee65-394c-47ae-9b9c-08be18fa8e62 864 0 2026-01-23 00:07:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:7c778bb748 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ip-172-31-18-130 goldmane-7c778bb748-h2nff eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] cali270c8aa9270 [] [] }} ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:07:59.913 [INFO][5009] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.045 [INFO][5068] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" HandleID="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Workload="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.045 [INFO][5068] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" 
HandleID="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Workload="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000327880), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"goldmane-7c778bb748-h2nff", "timestamp":"2026-01-23 00:08:00.045216681 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.048 [INFO][5068] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5068] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.144 [INFO][5068] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.175 [INFO][5068] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.219 [INFO][5068] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.232 [INFO][5068] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.237 [INFO][5068] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.245 [INFO][5068] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" 
Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.246 [INFO][5068] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.253 [INFO][5068] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4 Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.277 [INFO][5068] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.295 [INFO][5068] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.67/26] block=192.168.44.64/26 handle="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.296 [INFO][5068] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.67/26] handle="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" host="ip-172-31-18-130" Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.297 [INFO][5068] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:08:00.421142 containerd[2014]: 2026-01-23 00:08:00.298 [INFO][5068] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.67/26] IPv6=[] ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" HandleID="k8s-pod-network.2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Workload="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.323 [INFO][5009] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6e08ee65-394c-47ae-9b9c-08be18fa8e62", ResourceVersion:"864", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"goldmane-7c778bb748-h2nff", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, 
InterfaceName:"cali270c8aa9270", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.323 [INFO][5009] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.67/32] ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.324 [INFO][5009] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali270c8aa9270 ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.354 [INFO][5009] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.360 [INFO][5009] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0", GenerateName:"goldmane-7c778bb748-", Namespace:"calico-system", SelfLink:"", UID:"6e08ee65-394c-47ae-9b9c-08be18fa8e62", ResourceVersion:"864", Generation:0, 
CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"7c778bb748", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4", Pod:"goldmane-7c778bb748-h2nff", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.44.67/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"cali270c8aa9270", MAC:"46:68:d7:99:b8:65", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:00.427235 containerd[2014]: 2026-01-23 00:08:00.413 [INFO][5009] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" Namespace="calico-system" Pod="goldmane-7c778bb748-h2nff" WorkloadEndpoint="ip--172--31--18--130-k8s-goldmane--7c778bb748--h2nff-eth0" Jan 23 00:08:00.484254 containerd[2014]: time="2026-01-23T00:08:00.484052998Z" level=info msg="connecting to shim 2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4" address="unix:///run/containerd/s/c61f43772f5eb5377d1f863d1abf022bc612aba08c7e6751fffdbbfbebecb439" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:00.545793 systemd-networkd[1822]: vxlan.calico: Gained IPv6LL Jan 23 00:08:00.586657 systemd-networkd[1822]: cali83b2c16f1c6: Link UP Jan 23 00:08:00.589956 systemd-networkd[1822]: 
cali83b2c16f1c6: Gained carrier Jan 23 00:08:00.640079 systemd[1]: Started cri-containerd-2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4.scope - libcontainer container 2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4. Jan 23 00:08:00.652636 containerd[2014]: time="2026-01-23T00:08:00.642476096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5rlp,Uid:73991cb4-51f1-4920-a4d2-a782912c4922,Namespace:calico-system,Attempt:0,}" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:07:59.971 [INFO][5017] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0 calico-apiserver-866c48949f- calico-apiserver 9783fed8-ce36-4bde-9a81-2ed0b850cd1e 868 0 2026-01-23 00:07:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866c48949f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-130 calico-apiserver-866c48949f-lh2s6 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali83b2c16f1c6 [] [] }} ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:07:59.973 [INFO][5017] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.129 [INFO][5082] ipam/ipam_plugin.go 227: Calico 
CNI IPAM request count IPv4=1 IPv6=0 ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" HandleID="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.130 [INFO][5082] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" HandleID="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000121a40), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-130", "pod":"calico-apiserver-866c48949f-lh2s6", "timestamp":"2026-01-23 00:08:00.129315263 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.130 [INFO][5082] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.297 [INFO][5082] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.298 [INFO][5082] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.360 [INFO][5082] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.383 [INFO][5082] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.402 [INFO][5082] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.428 [INFO][5082] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.443 [INFO][5082] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.444 [INFO][5082] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.456 [INFO][5082] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86 Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.483 [INFO][5082] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.567 [INFO][5082] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.68/26] block=192.168.44.64/26 
handle="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.567 [INFO][5082] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.68/26] handle="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" host="ip-172-31-18-130" Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.567 [INFO][5082] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:08:00.657523 containerd[2014]: 2026-01-23 00:08:00.568 [INFO][5082] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.68/26] IPv6=[] ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" HandleID="k8s-pod-network.246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.575 [INFO][5017] cni-plugin/k8s.go 418: Populated endpoint ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0", GenerateName:"calico-apiserver-866c48949f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9783fed8-ce36-4bde-9a81-2ed0b850cd1e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866c48949f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-apiserver-866c48949f-lh2s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83b2c16f1c6", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.575 [INFO][5017] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.68/32] ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.575 [INFO][5017] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali83b2c16f1c6 ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.591 [INFO][5017] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 
00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.593 [INFO][5017] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0", GenerateName:"calico-apiserver-866c48949f-", Namespace:"calico-apiserver", SelfLink:"", UID:"9783fed8-ce36-4bde-9a81-2ed0b850cd1e", ResourceVersion:"868", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866c48949f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86", Pod:"calico-apiserver-866c48949f-lh2s6", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.68/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali83b2c16f1c6", MAC:"8e:70:37:af:83:cd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 
00:08:00.664940 containerd[2014]: 2026-01-23 00:08:00.627 [INFO][5017] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-lh2s6" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--lh2s6-eth0" Jan 23 00:08:00.732259 containerd[2014]: time="2026-01-23T00:08:00.732095036Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-bm8wz,Uid:3a923c28-62e4-468e-8af7-41e647711ef9,Namespace:kube-system,Attempt:0,} returns sandbox id \"f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9\"" Jan 23 00:08:00.759150 containerd[2014]: time="2026-01-23T00:08:00.757843144Z" level=info msg="CreateContainer within sandbox \"f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:00.826548 containerd[2014]: time="2026-01-23T00:08:00.825906491Z" level=info msg="connecting to shim 246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86" address="unix:///run/containerd/s/636e0ae48799b47b324e001e384c1b0f13ad92c5ce25b058b401dd808e3e1dd3" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:00.902344 containerd[2014]: time="2026-01-23T00:08:00.901463294Z" level=info msg="Container d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:00.901892 systemd[1]: Started cri-containerd-246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86.scope - libcontainer container 246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86. 
Jan 23 00:08:00.917994 containerd[2014]: time="2026-01-23T00:08:00.917895626Z" level=info msg="CreateContainer within sandbox \"f557770a6214a09075de66bc135aaa14c3055bfa6aea75106829910a733e2ef9\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d\"" Jan 23 00:08:00.921990 containerd[2014]: time="2026-01-23T00:08:00.921916626Z" level=info msg="StartContainer for \"d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d\"" Jan 23 00:08:00.942452 containerd[2014]: time="2026-01-23T00:08:00.942392615Z" level=info msg="connecting to shim d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d" address="unix:///run/containerd/s/1ebea3728896c1c50a37c40e55b0e76f79b3cbe66b93baa4b6ec67fb16cd4c92" protocol=ttrpc version=3 Jan 23 00:08:00.990127 systemd[1]: Started cri-containerd-d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d.scope - libcontainer container d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d. 
Jan 23 00:08:01.159650 containerd[2014]: time="2026-01-23T00:08:01.159578912Z" level=info msg="StartContainer for \"d4467e409d7c80dff66cdf88e392eb7ddd3a430b192d00abe0b452117c67463d\" returns successfully" Jan 23 00:08:01.187297 systemd-networkd[1822]: cali196c20e7785: Gained IPv6LL Jan 23 00:08:01.276072 containerd[2014]: time="2026-01-23T00:08:01.275884671Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-lh2s6,Uid:9783fed8-ce36-4bde-9a81-2ed0b850cd1e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"246da2214146354749cd872dc813bff23a3519ceb504be9ae33ebd7db6259f86\"" Jan 23 00:08:01.284114 containerd[2014]: time="2026-01-23T00:08:01.283944111Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:08:01.319666 containerd[2014]: time="2026-01-23T00:08:01.319482081Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-7c778bb748-h2nff,Uid:6e08ee65-394c-47ae-9b9c-08be18fa8e62,Namespace:calico-system,Attempt:0,} returns sandbox id \"2034c93f816c09d40a019cd6adb8f980fbf46221a50936bad298f30e10c66fd4\"" Jan 23 00:08:01.344255 systemd-networkd[1822]: calibdd2bac91ce: Link UP Jan 23 00:08:01.346685 systemd-networkd[1822]: calibdd2bac91ce: Gained carrier Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.009 [INFO][5192] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0 csi-node-driver- calico-system 73991cb4-51f1-4920-a4d2-a782912c4922 765 0 2026-01-23 00:07:38 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:9d99788f7 k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ip-172-31-18-130 csi-node-driver-d5rlp eth0 csi-node-driver [] [] [kns.calico-system 
ksa.calico-system.csi-node-driver] calibdd2bac91ce [] [] }} ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.011 [INFO][5192] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.168 [INFO][5272] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" HandleID="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Workload="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.172 [INFO][5272] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" HandleID="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Workload="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000331690), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"csi-node-driver-d5rlp", "timestamp":"2026-01-23 00:08:01.168741381 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.172 [INFO][5272] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.173 [INFO][5272] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.173 [INFO][5272] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.218 [INFO][5272] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.236 [INFO][5272] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.263 [INFO][5272] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.270 [INFO][5272] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.284 [INFO][5272] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.286 [INFO][5272] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.294 [INFO][5272] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901 Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.308 [INFO][5272] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" host="ip-172-31-18-130" Jan 23 00:08:01.381419 
containerd[2014]: 2026-01-23 00:08:01.328 [INFO][5272] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.69/26] block=192.168.44.64/26 handle="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.329 [INFO][5272] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.69/26] handle="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" host="ip-172-31-18-130" Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.329 [INFO][5272] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:08:01.381419 containerd[2014]: 2026-01-23 00:08:01.329 [INFO][5272] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.69/26] IPv6=[] ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" HandleID="k8s-pod-network.635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Workload="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.383269 containerd[2014]: 2026-01-23 00:08:01.337 [INFO][5192] cni-plugin/k8s.go 418: Populated endpoint ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73991cb4-51f1-4920-a4d2-a782912c4922", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", 
"k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"csi-node-driver-d5rlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd2bac91ce", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:01.383269 containerd[2014]: 2026-01-23 00:08:01.338 [INFO][5192] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.69/32] ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.383269 containerd[2014]: 2026-01-23 00:08:01.338 [INFO][5192] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calibdd2bac91ce ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.383269 containerd[2014]: 2026-01-23 00:08:01.347 [INFO][5192] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.383269 
containerd[2014]: 2026-01-23 00:08:01.348 [INFO][5192] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"73991cb4-51f1-4920-a4d2-a782912c4922", ResourceVersion:"765", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"9d99788f7", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901", Pod:"csi-node-driver-d5rlp", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.44.69/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"calibdd2bac91ce", MAC:"b2:51:6a:c2:c2:59", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:01.383269 containerd[2014]: 2026-01-23 00:08:01.371 
[INFO][5192] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" Namespace="calico-system" Pod="csi-node-driver-d5rlp" WorkloadEndpoint="ip--172--31--18--130-k8s-csi--node--driver--d5rlp-eth0" Jan 23 00:08:01.421530 containerd[2014]: time="2026-01-23T00:08:01.421378740Z" level=info msg="connecting to shim 635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901" address="unix:///run/containerd/s/02bcf2ca26272b4f727443d5a26481a5227b762b372381d2200d2cc40ab01486" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:01.481893 systemd[1]: Started cri-containerd-635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901.scope - libcontainer container 635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901. Jan 23 00:08:01.559152 containerd[2014]: time="2026-01-23T00:08:01.559100001Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:01.561170 containerd[2014]: time="2026-01-23T00:08:01.561008726Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:08:01.561170 containerd[2014]: time="2026-01-23T00:08:01.561079143Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:08:01.561436 kubelet[3616]: E0123 00:08:01.561392 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:01.562028 
kubelet[3616]: E0123 00:08:01.561459 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:01.563620 kubelet[3616]: E0123 00:08:01.562348 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:01.563620 kubelet[3616]: E0123 00:08:01.562462 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:08:01.563898 containerd[2014]: time="2026-01-23T00:08:01.562788552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:08:01.643543 containerd[2014]: time="2026-01-23T00:08:01.642151963Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-d5rlp,Uid:73991cb4-51f1-4920-a4d2-a782912c4922,Namespace:calico-system,Attempt:0,} returns sandbox id \"635c2d33773b4fc4b4475f53ecbe0761c9b5a8989784c8d6fac4ec107404d901\"" Jan 23 00:08:01.644141 containerd[2014]: 
time="2026-01-23T00:08:01.643887758Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6jc79,Uid:383114fa-f156-4be9-85ad-d3c3beab9901,Namespace:kube-system,Attempt:0,}" Jan 23 00:08:01.677870 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount460819202.mount: Deactivated successfully. Jan 23 00:08:01.896544 containerd[2014]: time="2026-01-23T00:08:01.895360116Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:01.899264 containerd[2014]: time="2026-01-23T00:08:01.897412241Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 23 00:08:01.899264 containerd[2014]: time="2026-01-23T00:08:01.897468601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77" Jan 23 00:08:01.899773 kubelet[3616]: E0123 00:08:01.899699 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:08:01.899889 kubelet[3616]: E0123 00:08:01.899773 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 23 00:08:01.900075 kubelet[3616]: E0123 00:08:01.900015 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container 
goldmane start failed in pod goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:01.900201 kubelet[3616]: E0123 00:08:01.900088 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:08:01.907012 systemd[1]: Started sshd@7-172.31.18.130:22-4.153.228.146:58408.service - OpenSSH per-connection server daemon (4.153.228.146:58408). 
Jan 23 00:08:01.909283 containerd[2014]: time="2026-01-23T00:08:01.909129168Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 23 00:08:02.109479 kubelet[3616]: E0123 00:08:02.109393 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:08:02.129707 kubelet[3616]: E0123 00:08:02.129453 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:08:02.146257 systemd-networkd[1822]: cali83b2c16f1c6: Gained IPv6LL Jan 23 00:08:02.185773 systemd-networkd[1822]: cali3886c81d861: Link UP Jan 23 00:08:02.190769 systemd-networkd[1822]: cali3886c81d861: Gained carrier Jan 23 00:08:02.215403 containerd[2014]: time="2026-01-23T00:08:02.214192781Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:02.217597 containerd[2014]: time="2026-01-23T00:08:02.216274267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 23 00:08:02.217597 containerd[2014]: time="2026-01-23T00:08:02.216539107Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69" Jan 23 00:08:02.224624 kubelet[3616]: E0123 00:08:02.217893 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:08:02.224624 kubelet[3616]: E0123 00:08:02.217984 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:08:02.224624 kubelet[3616]: E0123 00:08:02.218320 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:02.224888 containerd[2014]: time="2026-01-23T00:08:02.224753390Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.783 [INFO][5360] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0 coredns-66bc5c9577- kube-system 383114fa-f156-4be9-85ad-d3c3beab9901 862 0 2026-01-23 00:07:10 +0000 UTC map[k8s-app:kube-dns pod-template-hash:66bc5c9577 projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ip-172-31-18-130 coredns-66bc5c9577-6jc79 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali3886c81d861 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 } {liveness-probe TCP 8080 0 } {readiness-probe TCP 8181 0 }] [] }} ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.783 [INFO][5360] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.897 [INFO][5372] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" HandleID="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.898 [INFO][5372] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" HandleID="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000345210), 
Attrs:map[string]string{"namespace":"kube-system", "node":"ip-172-31-18-130", "pod":"coredns-66bc5c9577-6jc79", "timestamp":"2026-01-23 00:08:01.897891078 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.898 [INFO][5372] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.898 [INFO][5372] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:01.898 [INFO][5372] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.003 [INFO][5372] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.033 [INFO][5372] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.049 [INFO][5372] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.059 [INFO][5372] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.071 [INFO][5372] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.071 [INFO][5372] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 
handle="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.079 [INFO][5372] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425 Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.106 [INFO][5372] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.145 [INFO][5372] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.70/26] block=192.168.44.64/26 handle="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.146 [INFO][5372] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.70/26] handle="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" host="ip-172-31-18-130" Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.149 [INFO][5372] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 23 00:08:02.272298 containerd[2014]: 2026-01-23 00:08:02.151 [INFO][5372] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.70/26] IPv6=[] ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" HandleID="k8s-pod-network.d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Workload="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.164 [INFO][5360] cni-plugin/k8s.go 418: Populated endpoint ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"383114fa-f156-4be9-85ad-d3c3beab9901", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"coredns-66bc5c9577-6jc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3886c81d861", MAC:"", 
Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.166 [INFO][5360] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.70/32] ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.166 [INFO][5360] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3886c81d861 ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.196 [INFO][5360] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.214 [INFO][5360] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0", GenerateName:"coredns-66bc5c9577-", Namespace:"kube-system", SelfLink:"", UID:"383114fa-f156-4be9-85ad-d3c3beab9901", ResourceVersion:"862", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 10, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"66bc5c9577", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425", Pod:"coredns-66bc5c9577-6jc79", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.44.70/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali3886c81d861", MAC:"16:ca:4b:04:3d:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", 
Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"liveness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1f90, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"readiness-probe", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x1ff5, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:02.276268 containerd[2014]: 2026-01-23 00:08:02.261 [INFO][5360] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" Namespace="kube-system" Pod="coredns-66bc5c9577-6jc79" WorkloadEndpoint="ip--172--31--18--130-k8s-coredns--66bc5c9577--6jc79-eth0" Jan 23 00:08:02.273854 systemd-networkd[1822]: cali270c8aa9270: Gained IPv6LL Jan 23 00:08:02.298920 kubelet[3616]: I0123 00:08:02.298770 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-bm8wz" podStartSLOduration=52.298741905 podStartE2EDuration="52.298741905s" podCreationTimestamp="2026-01-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:02.2959545 +0000 UTC m=+54.922191807" watchObservedRunningTime="2026-01-23 00:08:02.298741905 +0000 UTC m=+54.924979200" Jan 23 00:08:02.331173 containerd[2014]: time="2026-01-23T00:08:02.331096260Z" level=info msg="connecting to shim d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425" address="unix:///run/containerd/s/832143abbf1ee7260a7ffbbd969493a6f92a10947766d1acf4e598d761eefe1d" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:02.418853 systemd[1]: Started cri-containerd-d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425.scope - libcontainer container 
d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425. Jan 23 00:08:02.518045 containerd[2014]: time="2026-01-23T00:08:02.517979092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:02.519456 containerd[2014]: time="2026-01-23T00:08:02.519232465Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:08:02.519456 containerd[2014]: time="2026-01-23T00:08:02.519368405Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:08:02.520043 kubelet[3616]: E0123 00:08:02.519878 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:08:02.520636 kubelet[3616]: E0123 00:08:02.520543 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:08:02.521262 kubelet[3616]: E0123 00:08:02.520771 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod 
csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:02.521262 kubelet[3616]: E0123 00:08:02.520918 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:08:02.526719 sshd[5378]: Accepted publickey for core from 4.153.228.146 port 58408 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:02.530326 sshd-session[5378]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:02.550990 systemd-logind[1983]: New session 8 of user core. Jan 23 00:08:02.555887 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 23 00:08:02.628648 containerd[2014]: time="2026-01-23T00:08:02.625487840Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-66bc5c9577-6jc79,Uid:383114fa-f156-4be9-85ad-d3c3beab9901,Namespace:kube-system,Attempt:0,} returns sandbox id \"d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425\"" Jan 23 00:08:02.639324 containerd[2014]: time="2026-01-23T00:08:02.639073096Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7974d7c8-hppng,Uid:fb41eab3-a03e-4b48-bc83-fecd2d987e90,Namespace:calico-system,Attempt:0,}" Jan 23 00:08:02.658589 systemd-networkd[1822]: calibdd2bac91ce: Gained IPv6LL Jan 23 00:08:02.663002 containerd[2014]: time="2026-01-23T00:08:02.662879435Z" level=info msg="CreateContainer within sandbox \"d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 23 00:08:02.717968 containerd[2014]: time="2026-01-23T00:08:02.717813142Z" level=info msg="Container 131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:08:02.723124 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1401161130.mount: Deactivated successfully. 
Jan 23 00:08:02.750448 containerd[2014]: time="2026-01-23T00:08:02.750335437Z" level=info msg="CreateContainer within sandbox \"d52957eadb478d8d9cdc5667c55e9608f9fad67385c71a2b0a3b9b63e2608425\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785\"" Jan 23 00:08:02.756103 containerd[2014]: time="2026-01-23T00:08:02.754354458Z" level=info msg="StartContainer for \"131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785\"" Jan 23 00:08:02.760407 containerd[2014]: time="2026-01-23T00:08:02.760175674Z" level=info msg="connecting to shim 131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785" address="unix:///run/containerd/s/832143abbf1ee7260a7ffbbd969493a6f92a10947766d1acf4e598d761eefe1d" protocol=ttrpc version=3 Jan 23 00:08:02.847912 systemd[1]: Started cri-containerd-131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785.scope - libcontainer container 131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785. 
Jan 23 00:08:03.061553 containerd[2014]: time="2026-01-23T00:08:03.061345591Z" level=info msg="StartContainer for \"131d6a60c01990cbeafff81fccc94490eb0cae3472a3f6b8baadd369d4e56785\" returns successfully" Jan 23 00:08:03.150609 kubelet[3616]: E0123 00:08:03.150480 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:08:03.154176 kubelet[3616]: E0123 00:08:03.154066 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:08:03.154403 kubelet[3616]: E0123 00:08:03.154253 3616 pod_workers.go:1324] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:08:03.245368 kubelet[3616]: I0123 00:08:03.245264 3616 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-66bc5c9577-6jc79" podStartSLOduration=53.24524031 podStartE2EDuration="53.24524031s" podCreationTimestamp="2026-01-23 00:07:10 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 00:08:03.240967388 +0000 UTC m=+55.867204707" watchObservedRunningTime="2026-01-23 00:08:03.24524031 +0000 UTC m=+55.871477605" Jan 23 00:08:03.349067 sshd[5440]: Connection closed by 4.153.228.146 port 58408 Jan 23 00:08:03.355073 sshd-session[5378]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:03.356663 systemd-networkd[1822]: cali13a6c7360fe: Link UP Jan 23 00:08:03.363059 systemd-networkd[1822]: cali13a6c7360fe: Gained carrier Jan 23 00:08:03.377818 systemd[1]: sshd@7-172.31.18.130:22-4.153.228.146:58408.service: Deactivated successfully. Jan 23 00:08:03.386190 systemd[1]: session-8.scope: Deactivated successfully. Jan 23 00:08:03.396004 systemd-logind[1983]: Session 8 logged out. Waiting for processes to exit. Jan 23 00:08:03.402461 systemd-logind[1983]: Removed session 8. 
Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:02.900 [INFO][5448] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0 calico-kube-controllers-6f7974d7c8- calico-system fb41eab3-a03e-4b48-bc83-fecd2d987e90 869 0 2026-01-23 00:07:38 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:6f7974d7c8 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ip-172-31-18-130 calico-kube-controllers-6f7974d7c8-hppng eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali13a6c7360fe [] [] }} ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:02.900 [INFO][5448] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.103 [INFO][5489] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" HandleID="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.106 [INFO][5489] ipam/ipam_plugin.go 275: Auto assigning IP 
ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" HandleID="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000378550), Attrs:map[string]string{"namespace":"calico-system", "node":"ip-172-31-18-130", "pod":"calico-kube-controllers-6f7974d7c8-hppng", "timestamp":"2026-01-23 00:08:03.103729652 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.106 [INFO][5489] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.106 [INFO][5489] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.106 [INFO][5489] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.181 [INFO][5489] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.208 [INFO][5489] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.267 [INFO][5489] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.272 [INFO][5489] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.279 [INFO][5489] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.280 [INFO][5489] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.285 [INFO][5489] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347 Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.307 [INFO][5489] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.327 [INFO][5489] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.71/26] block=192.168.44.64/26 
handle="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.327 [INFO][5489] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.71/26] handle="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" host="ip-172-31-18-130" Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.328 [INFO][5489] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:08:03.425979 containerd[2014]: 2026-01-23 00:08:03.328 [INFO][5489] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.71/26] IPv6=[] ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" HandleID="k8s-pod-network.8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Workload="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.336 [INFO][5448] cni-plugin/k8s.go 418: Populated endpoint ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0", GenerateName:"calico-kube-controllers-6f7974d7c8-", Namespace:"calico-system", SelfLink:"", UID:"fb41eab3-a03e-4b48-bc83-fecd2d987e90", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7974d7c8", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-kube-controllers-6f7974d7c8-hppng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13a6c7360fe", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.337 [INFO][5448] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.71/32] ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.337 [INFO][5448] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali13a6c7360fe ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.368 [INFO][5448] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" 
WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.369 [INFO][5448] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0", GenerateName:"calico-kube-controllers-6f7974d7c8-", Namespace:"calico-system", SelfLink:"", UID:"fb41eab3-a03e-4b48-bc83-fecd2d987e90", ResourceVersion:"869", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 38, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"6f7974d7c8", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347", Pod:"calico-kube-controllers-6f7974d7c8-hppng", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.44.71/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali13a6c7360fe", 
MAC:"86:03:ba:1d:4b:ec", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:03.434288 containerd[2014]: 2026-01-23 00:08:03.416 [INFO][5448] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" Namespace="calico-system" Pod="calico-kube-controllers-6f7974d7c8-hppng" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--kube--controllers--6f7974d7c8--hppng-eth0" Jan 23 00:08:03.505857 containerd[2014]: time="2026-01-23T00:08:03.505713565Z" level=info msg="connecting to shim 8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347" address="unix:///run/containerd/s/2663e6453f6a8b4f0a68f85c2962ab024fd71563f33c49a5a0de5b154c29de08" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:03.593781 systemd[1]: Started cri-containerd-8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347.scope - libcontainer container 8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347. 
Jan 23 00:08:03.617697 systemd-networkd[1822]: cali3886c81d861: Gained IPv6LL Jan 23 00:08:03.636836 containerd[2014]: time="2026-01-23T00:08:03.636076843Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-zlhcq,Uid:7325a6f4-e6b9-4cb1-9e21-13aa088be606,Namespace:calico-apiserver,Attempt:0,}" Jan 23 00:08:03.825930 containerd[2014]: time="2026-01-23T00:08:03.825743747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-6f7974d7c8-hppng,Uid:fb41eab3-a03e-4b48-bc83-fecd2d987e90,Namespace:calico-system,Attempt:0,} returns sandbox id \"8eba487f9832ef9d50b572f53080610bb7563087f143a70bf007aacff22a1347\"" Jan 23 00:08:03.830807 containerd[2014]: time="2026-01-23T00:08:03.830583518Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:08:03.982978 systemd-networkd[1822]: cali4f8067efb9a: Link UP Jan 23 00:08:03.987119 systemd-networkd[1822]: cali4f8067efb9a: Gained carrier Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.792 [INFO][5563] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0 calico-apiserver-866c48949f- calico-apiserver 7325a6f4-e6b9-4cb1-9e21-13aa088be606 865 0 2026-01-23 00:07:25 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:866c48949f projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ip-172-31-18-130 calico-apiserver-866c48949f-zlhcq eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali4f8067efb9a [] [] }} ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" 
WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.792 [INFO][5563] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.884 [INFO][5583] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" HandleID="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.884 [INFO][5583] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" HandleID="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400031b8f0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ip-172-31-18-130", "pod":"calico-apiserver-866c48949f-zlhcq", "timestamp":"2026-01-23 00:08:03.884126029 +0000 UTC"}, Hostname:"ip-172-31-18-130", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.885 [INFO][5583] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.885 [INFO][5583] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.885 [INFO][5583] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ip-172-31-18-130' Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.905 [INFO][5583] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.916 [INFO][5583] ipam/ipam.go 394: Looking up existing affinities for host host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.932 [INFO][5583] ipam/ipam.go 511: Trying affinity for 192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.937 [INFO][5583] ipam/ipam.go 158: Attempting to load block cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.943 [INFO][5583] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.44.64/26 host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.943 [INFO][5583] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.44.64/26 handle="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.947 [INFO][5583] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.954 [INFO][5583] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.44.64/26 handle="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.970 [INFO][5583] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.44.72/26] block=192.168.44.64/26 
handle="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.971 [INFO][5583] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.44.72/26] handle="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" host="ip-172-31-18-130" Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.971 [INFO][5583] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 23 00:08:04.029037 containerd[2014]: 2026-01-23 00:08:03.971 [INFO][5583] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.44.72/26] IPv6=[] ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" HandleID="k8s-pod-network.9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Workload="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.030978 containerd[2014]: 2026-01-23 00:08:03.976 [INFO][5563] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0", GenerateName:"calico-apiserver-866c48949f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7325a6f4-e6b9-4cb1-9e21-13aa088be606", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866c48949f", 
"projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"", Pod:"calico-apiserver-866c48949f-zlhcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f8067efb9a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 00:08:04.030978 containerd[2014]: 2026-01-23 00:08:03.977 [INFO][5563] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.44.72/32] ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.030978 containerd[2014]: 2026-01-23 00:08:03.977 [INFO][5563] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4f8067efb9a ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.030978 containerd[2014]: 2026-01-23 00:08:03.983 [INFO][5563] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 
00:08:04.030978 containerd[2014]: 2026-01-23 00:08:03.990 [INFO][5563] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0", GenerateName:"calico-apiserver-866c48949f-", Namespace:"calico-apiserver", SelfLink:"", UID:"7325a6f4-e6b9-4cb1-9e21-13aa088be606", ResourceVersion:"865", Generation:0, CreationTimestamp:time.Date(2026, time.January, 23, 0, 7, 25, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"866c48949f", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ip-172-31-18-130", ContainerID:"9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e", Pod:"calico-apiserver-866c48949f-zlhcq", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.44.72/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali4f8067efb9a", MAC:"da:3e:5d:3d:7f:24", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 23 
00:08:04.030978 containerd[2014]: 2026-01-23 00:08:04.021 [INFO][5563] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" Namespace="calico-apiserver" Pod="calico-apiserver-866c48949f-zlhcq" WorkloadEndpoint="ip--172--31--18--130-k8s-calico--apiserver--866c48949f--zlhcq-eth0" Jan 23 00:08:04.091401 containerd[2014]: time="2026-01-23T00:08:04.091343178Z" level=info msg="connecting to shim 9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e" address="unix:///run/containerd/s/e36df4f4ca0b06aa67b2929043e764fc80b00408c4f6e6443c5237deecd15ea1" namespace=k8s.io protocol=ttrpc version=3 Jan 23 00:08:04.137723 containerd[2014]: time="2026-01-23T00:08:04.135989092Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:04.165092 containerd[2014]: time="2026-01-23T00:08:04.164961966Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:08:04.165520 containerd[2014]: time="2026-01-23T00:08:04.165044485Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:08:04.166828 kubelet[3616]: E0123 00:08:04.166461 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:08:04.168527 kubelet[3616]: E0123 00:08:04.168214 3616 kuberuntime_image.go:43] "Failed 
to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:08:04.169880 kubelet[3616]: E0123 00:08:04.169650 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:04.171059 kubelet[3616]: E0123 00:08:04.170326 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90" Jan 23 00:08:04.199185 systemd[1]: Started cri-containerd-9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e.scope - libcontainer container 9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e. 
Jan 23 00:08:04.363788 containerd[2014]: time="2026-01-23T00:08:04.362462355Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-866c48949f-zlhcq,Uid:7325a6f4-e6b9-4cb1-9e21-13aa088be606,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"9c60d7fcbe573405b4930b03c415bf57abcd83ef4423eb1b67d5df53dfca871e\"" Jan 23 00:08:04.369312 containerd[2014]: time="2026-01-23T00:08:04.368764482Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:08:04.653703 containerd[2014]: time="2026-01-23T00:08:04.653552646Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:04.655531 containerd[2014]: time="2026-01-23T00:08:04.654938792Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:08:04.655531 containerd[2014]: time="2026-01-23T00:08:04.654948484Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:08:04.655931 kubelet[3616]: E0123 00:08:04.655803 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:04.656200 kubelet[3616]: E0123 00:08:04.656109 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not 
found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:04.656348 kubelet[3616]: E0123 00:08:04.656299 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:04.656536 kubelet[3616]: E0123 00:08:04.656369 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:08:04.770897 systemd-networkd[1822]: cali13a6c7360fe: Gained IPv6LL Jan 23 00:08:05.174811 kubelet[3616]: E0123 00:08:05.174274 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90" Jan 23 00:08:05.177023 kubelet[3616]: E0123 00:08:05.174475 3616 pod_workers.go:1324] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:08:05.283187 systemd-networkd[1822]: cali4f8067efb9a: Gained IPv6LL Jan 23 00:08:06.175118 kubelet[3616]: E0123 00:08:06.175043 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:08:07.629153 ntpd[2214]: Listen normally on 6 vxlan.calico 192.168.44.64:123 Jan 23 00:08:07.629241 ntpd[2214]: Listen normally on 7 cali81c8552bdf2 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 6 vxlan.calico 192.168.44.64:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 7 cali81c8552bdf2 [fe80::ecee:eeff:feee:eeee%4]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 8 vxlan.calico [fe80::64b5:5ff:fe84:80e8%5]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 9 cali196c20e7785 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 
00:08:07 ntpd[2214]: Listen normally on 10 cali270c8aa9270 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 11 cali83b2c16f1c6 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 12 calibdd2bac91ce [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 13 cali3886c81d861 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 14 cali13a6c7360fe [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 00:08:07.631455 ntpd[2214]: 23 Jan 00:08:07 ntpd[2214]: Listen normally on 15 cali4f8067efb9a [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 00:08:07.629289 ntpd[2214]: Listen normally on 8 vxlan.calico [fe80::64b5:5ff:fe84:80e8%5]:123 Jan 23 00:08:07.629335 ntpd[2214]: Listen normally on 9 cali196c20e7785 [fe80::ecee:eeff:feee:eeee%8]:123 Jan 23 00:08:07.629379 ntpd[2214]: Listen normally on 10 cali270c8aa9270 [fe80::ecee:eeff:feee:eeee%9]:123 Jan 23 00:08:07.629428 ntpd[2214]: Listen normally on 11 cali83b2c16f1c6 [fe80::ecee:eeff:feee:eeee%10]:123 Jan 23 00:08:07.629472 ntpd[2214]: Listen normally on 12 calibdd2bac91ce [fe80::ecee:eeff:feee:eeee%11]:123 Jan 23 00:08:07.629545 ntpd[2214]: Listen normally on 13 cali3886c81d861 [fe80::ecee:eeff:feee:eeee%12]:123 Jan 23 00:08:07.629590 ntpd[2214]: Listen normally on 14 cali13a6c7360fe [fe80::ecee:eeff:feee:eeee%13]:123 Jan 23 00:08:07.629644 ntpd[2214]: Listen normally on 15 cali4f8067efb9a [fe80::ecee:eeff:feee:eeee%14]:123 Jan 23 00:08:08.434405 systemd[1]: Started sshd@8-172.31.18.130:22-4.153.228.146:36962.service - OpenSSH per-connection server daemon (4.153.228.146:36962). 
Jan 23 00:08:08.975452 sshd[5662]: Accepted publickey for core from 4.153.228.146 port 36962 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:08.978828 sshd-session[5662]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:08.987960 systemd-logind[1983]: New session 9 of user core. Jan 23 00:08:08.993741 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 23 00:08:09.581505 sshd[5666]: Connection closed by 4.153.228.146 port 36962 Jan 23 00:08:09.582448 sshd-session[5662]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:09.591185 systemd[1]: sshd@8-172.31.18.130:22-4.153.228.146:36962.service: Deactivated successfully. Jan 23 00:08:09.596766 systemd[1]: session-9.scope: Deactivated successfully. Jan 23 00:08:09.598990 systemd-logind[1983]: Session 9 logged out. Waiting for processes to exit. Jan 23 00:08:09.602404 systemd-logind[1983]: Removed session 9. Jan 23 00:08:12.636574 containerd[2014]: time="2026-01-23T00:08:12.635338356Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 23 00:08:12.931580 containerd[2014]: time="2026-01-23T00:08:12.931270841Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:12.933309 containerd[2014]: time="2026-01-23T00:08:12.933206864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 23 00:08:12.933593 containerd[2014]: time="2026-01-23T00:08:12.933261449Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73" Jan 23 00:08:12.934027 kubelet[3616]: E0123 00:08:12.933955 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:08:12.934622 kubelet[3616]: E0123 00:08:12.934025 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 23 00:08:12.934622 kubelet[3616]: E0123 00:08:12.934132 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:12.936445 containerd[2014]: time="2026-01-23T00:08:12.936379002Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 23 00:08:13.211964 containerd[2014]: time="2026-01-23T00:08:13.211666130Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:13.213181 containerd[2014]: time="2026-01-23T00:08:13.213067173Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 23 00:08:13.213708 containerd[2014]: time="2026-01-23T00:08:13.213135191Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85" Jan 23 00:08:13.213972 kubelet[3616]: E0123 00:08:13.213887 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:08:13.214099 kubelet[3616]: E0123 00:08:13.213981 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 23 00:08:13.214737 kubelet[3616]: E0123 00:08:13.214135 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:13.214895 kubelet[3616]: E0123 00:08:13.214780 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:08:14.677539 systemd[1]: Started sshd@9-172.31.18.130:22-4.153.228.146:45170.service - OpenSSH per-connection server daemon (4.153.228.146:45170). Jan 23 00:08:15.195474 sshd[5699]: Accepted publickey for core from 4.153.228.146 port 45170 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:15.198895 sshd-session[5699]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:15.214971 systemd-logind[1983]: New session 10 of user core. Jan 23 00:08:15.224784 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 23 00:08:15.634109 containerd[2014]: time="2026-01-23T00:08:15.634051618Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 23 00:08:15.669884 sshd[5704]: Connection closed by 4.153.228.146 port 45170 Jan 23 00:08:15.670353 sshd-session[5699]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:15.681275 systemd[1]: sshd@9-172.31.18.130:22-4.153.228.146:45170.service: Deactivated successfully. Jan 23 00:08:15.686117 systemd[1]: session-10.scope: Deactivated successfully. Jan 23 00:08:15.689338 systemd-logind[1983]: Session 10 logged out. Waiting for processes to exit. Jan 23 00:08:15.694002 systemd-logind[1983]: Removed session 10. Jan 23 00:08:15.765877 systemd[1]: Started sshd@10-172.31.18.130:22-4.153.228.146:45172.service - OpenSSH per-connection server daemon (4.153.228.146:45172). 
Jan 23 00:08:15.919980 containerd[2014]: time="2026-01-23T00:08:15.919784441Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:15.921197 containerd[2014]: time="2026-01-23T00:08:15.921106048Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:08:15.921364 containerd[2014]: time="2026-01-23T00:08:15.921267068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:08:15.921684 kubelet[3616]: E0123 00:08:15.921567 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:08:15.921684 kubelet[3616]: E0123 00:08:15.921667 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:08:15.922293 kubelet[3616]: E0123 00:08:15.922064 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:15.922293 kubelet[3616]: E0123 00:08:15.922151 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:08:15.923127 containerd[2014]: time="2026-01-23T00:08:15.922933622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:08:16.189474 containerd[2014]: time="2026-01-23T00:08:16.189303199Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:16.191047 containerd[2014]: time="2026-01-23T00:08:16.190953478Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:08:16.191224 containerd[2014]: time="2026-01-23T00:08:16.190969250Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:08:16.191619 kubelet[3616]: E0123 00:08:16.191548 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:08:16.191747 kubelet[3616]: E0123 00:08:16.191630 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:08:16.191832 kubelet[3616]: E0123 00:08:16.191750 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:16.194826 containerd[2014]: time="2026-01-23T00:08:16.194742586Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:08:16.308908 sshd[5720]: Accepted publickey for core from 4.153.228.146 port 45172 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:16.311569 sshd-session[5720]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:16.321982 systemd-logind[1983]: New session 11 of user core.
Jan 23 00:08:16.331854 systemd[1]: Started session-11.scope - Session 11 of User core.
Jan 23 00:08:16.480893 containerd[2014]: time="2026-01-23T00:08:16.480728159Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:16.482146 containerd[2014]: time="2026-01-23T00:08:16.482067757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:08:16.482335 containerd[2014]: time="2026-01-23T00:08:16.482236081Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:08:16.484963 kubelet[3616]: E0123 00:08:16.484861 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:08:16.484963 kubelet[3616]: E0123 00:08:16.484946 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:08:16.485194 kubelet[3616]: E0123 00:08:16.485082 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:16.485308 kubelet[3616]: E0123 00:08:16.485169 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:08:16.917833 sshd[5723]: Connection closed by 4.153.228.146 port 45172
Jan 23 00:08:16.957982 sshd-session[5720]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:16.965853 systemd[1]: sshd@10-172.31.18.130:22-4.153.228.146:45172.service: Deactivated successfully.
Jan 23 00:08:16.971293 systemd[1]: session-11.scope: Deactivated successfully.
Jan 23 00:08:16.973635 systemd-logind[1983]: Session 11 logged out. Waiting for processes to exit.
Jan 23 00:08:16.977718 systemd-logind[1983]: Removed session 11.
Jan 23 00:08:17.009471 systemd[1]: Started sshd@11-172.31.18.130:22-4.153.228.146:45188.service - OpenSSH per-connection server daemon (4.153.228.146:45188).
Jan 23 00:08:17.529416 sshd[5733]: Accepted publickey for core from 4.153.228.146 port 45188 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:17.531894 sshd-session[5733]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:17.540756 systemd-logind[1983]: New session 12 of user core.
Jan 23 00:08:17.547763 systemd[1]: Started session-12.scope - Session 12 of User core.
Jan 23 00:08:17.635456 containerd[2014]: time="2026-01-23T00:08:17.635372659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:08:17.911019 containerd[2014]: time="2026-01-23T00:08:17.910839338Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:17.913313 containerd[2014]: time="2026-01-23T00:08:17.913162983Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:08:17.913313 containerd[2014]: time="2026-01-23T00:08:17.913231541Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:08:17.914128 kubelet[3616]: E0123 00:08:17.913576 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:08:17.914128 kubelet[3616]: E0123 00:08:17.913644 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:08:17.914128 kubelet[3616]: E0123 00:08:17.913777 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:17.914128 kubelet[3616]: E0123 00:08:17.913836 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e"
Jan 23 00:08:18.045557 sshd[5736]: Connection closed by 4.153.228.146 port 45188
Jan 23 00:08:18.044991 sshd-session[5733]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:18.052883 systemd[1]: sshd@11-172.31.18.130:22-4.153.228.146:45188.service: Deactivated successfully.
Jan 23 00:08:18.059810 systemd[1]: session-12.scope: Deactivated successfully.
Jan 23 00:08:18.061750 systemd-logind[1983]: Session 12 logged out. Waiting for processes to exit.
Jan 23 00:08:18.065570 systemd-logind[1983]: Removed session 12.
Jan 23 00:08:20.635555 containerd[2014]: time="2026-01-23T00:08:20.635267786Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:08:20.895826 containerd[2014]: time="2026-01-23T00:08:20.895654804Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:20.897220 containerd[2014]: time="2026-01-23T00:08:20.897116441Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:08:20.897220 containerd[2014]: time="2026-01-23T00:08:20.897183200Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:08:20.898068 kubelet[3616]: E0123 00:08:20.897757 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:08:20.898068 kubelet[3616]: E0123 00:08:20.897843 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:08:20.898068 kubelet[3616]: E0123 00:08:20.898036 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:20.899359 kubelet[3616]: E0123 00:08:20.898577 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:08:20.899653 containerd[2014]: time="2026-01-23T00:08:20.898312735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 00:08:21.151267 containerd[2014]: time="2026-01-23T00:08:21.151097632Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:21.152518 containerd[2014]: time="2026-01-23T00:08:21.152407928Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 00:08:21.152891 containerd[2014]: time="2026-01-23T00:08:21.152481104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:08:21.152965 kubelet[3616]: E0123 00:08:21.152915 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:08:21.153041 kubelet[3616]: E0123 00:08:21.152977 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:08:21.153137 kubelet[3616]: E0123 00:08:21.153089 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:21.153327 kubelet[3616]: E0123 00:08:21.153159 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:08:23.156946 systemd[1]: Started sshd@12-172.31.18.130:22-4.153.228.146:45190.service - OpenSSH per-connection server daemon (4.153.228.146:45190).
Jan 23 00:08:23.728106 sshd[5758]: Accepted publickey for core from 4.153.228.146 port 45190 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:23.732740 sshd-session[5758]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:23.745773 systemd-logind[1983]: New session 13 of user core.
Jan 23 00:08:23.754787 systemd[1]: Started session-13.scope - Session 13 of User core.
Jan 23 00:08:24.250651 sshd[5762]: Connection closed by 4.153.228.146 port 45190
Jan 23 00:08:24.251538 sshd-session[5758]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:24.259898 systemd-logind[1983]: Session 13 logged out. Waiting for processes to exit.
Jan 23 00:08:24.261431 systemd[1]: sshd@12-172.31.18.130:22-4.153.228.146:45190.service: Deactivated successfully.
Jan 23 00:08:24.268221 systemd[1]: session-13.scope: Deactivated successfully.
Jan 23 00:08:24.272456 systemd-logind[1983]: Removed session 13.
Jan 23 00:08:25.642466 kubelet[3616]: E0123 00:08:25.642382 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d"
Jan 23 00:08:27.642144 kubelet[3616]: E0123 00:08:27.641911 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:08:28.636289 kubelet[3616]: E0123 00:08:28.636166 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:08:29.351021 systemd[1]: Started sshd@13-172.31.18.130:22-4.153.228.146:41984.service - OpenSSH per-connection server daemon (4.153.228.146:41984).
Jan 23 00:08:29.913612 sshd[5800]: Accepted publickey for core from 4.153.228.146 port 41984 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:29.916674 sshd-session[5800]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:29.925133 systemd-logind[1983]: New session 14 of user core.
Jan 23 00:08:29.933751 systemd[1]: Started session-14.scope - Session 14 of User core.
Jan 23 00:08:30.432767 sshd[5803]: Connection closed by 4.153.228.146 port 41984
Jan 23 00:08:30.433287 sshd-session[5800]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:30.442193 systemd[1]: sshd@13-172.31.18.130:22-4.153.228.146:41984.service: Deactivated successfully.
Jan 23 00:08:30.448822 systemd[1]: session-14.scope: Deactivated successfully.
Jan 23 00:08:30.453514 systemd-logind[1983]: Session 14 logged out. Waiting for processes to exit.
Jan 23 00:08:30.456220 systemd-logind[1983]: Removed session 14.
Jan 23 00:08:33.637796 kubelet[3616]: E0123 00:08:33.637596 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e"
Jan 23 00:08:34.634625 kubelet[3616]: E0123 00:08:34.633608 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:08:35.528833 systemd[1]: Started sshd@14-172.31.18.130:22-4.153.228.146:57126.service - OpenSSH per-connection server daemon (4.153.228.146:57126).
Jan 23 00:08:35.632825 kubelet[3616]: E0123 00:08:35.632761 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:08:36.048309 sshd[5817]: Accepted publickey for core from 4.153.228.146 port 57126 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:08:36.051455 sshd-session[5817]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:08:36.061642 systemd-logind[1983]: New session 15 of user core.
Jan 23 00:08:36.066854 systemd[1]: Started session-15.scope - Session 15 of User core.
Jan 23 00:08:36.539059 sshd[5820]: Connection closed by 4.153.228.146 port 57126
Jan 23 00:08:36.539644 sshd-session[5817]: pam_unix(sshd:session): session closed for user core
Jan 23 00:08:36.554382 systemd[1]: sshd@14-172.31.18.130:22-4.153.228.146:57126.service: Deactivated successfully.
Jan 23 00:08:36.561103 systemd[1]: session-15.scope: Deactivated successfully.
Jan 23 00:08:36.566122 systemd-logind[1983]: Session 15 logged out. Waiting for processes to exit.
Jan 23 00:08:36.571958 systemd-logind[1983]: Removed session 15.
Jan 23 00:08:40.635726 containerd[2014]: time="2026-01-23T00:08:40.635658552Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:08:40.937233 containerd[2014]: time="2026-01-23T00:08:40.936783755Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:40.938855 containerd[2014]: time="2026-01-23T00:08:40.938695466Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:08:40.939283 containerd[2014]: time="2026-01-23T00:08:40.938837068Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:08:40.940762 kubelet[3616]: E0123 00:08:40.939689 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:08:40.940762 kubelet[3616]: E0123 00:08:40.939749 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:08:40.940762 kubelet[3616]: E0123 00:08:40.939982 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:40.942038 containerd[2014]: time="2026-01-23T00:08:40.941234872Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 00:08:41.251840 containerd[2014]: time="2026-01-23T00:08:41.251226136Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:41.252632 containerd[2014]: time="2026-01-23T00:08:41.252551761Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:08:41.252741 containerd[2014]: time="2026-01-23T00:08:41.252683419Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:08:41.253280 kubelet[3616]: E0123 00:08:41.252991 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:08:41.253280 kubelet[3616]: E0123 00:08:41.253061 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:08:41.253473 kubelet[3616]: E0123 00:08:41.253284 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:41.254261 kubelet[3616]: E0123 00:08:41.253453 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:08:41.255649 containerd[2014]: time="2026-01-23T00:08:41.255562735Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:08:41.535154 containerd[2014]: time="2026-01-23T00:08:41.534660992Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:41.536229 containerd[2014]: time="2026-01-23T00:08:41.536156848Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:08:41.536548 containerd[2014]: time="2026-01-23T00:08:41.536281802Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:08:41.537538 kubelet[3616]: E0123 00:08:41.536665 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:08:41.537538 kubelet[3616]: E0123 00:08:41.536723 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:08:41.537538 kubelet[3616]: E0123 00:08:41.536828 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:08:41.537804 kubelet[3616]: E0123 00:08:41.536892 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d"
Jan 23 00:08:41.638309 containerd[2014]: time="2026-01-23T00:08:41.638067084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:08:41.655409 systemd[1]: Started sshd@15-172.31.18.130:22-4.153.228.146:57132.service - OpenSSH per-connection server daemon (4.153.228.146:57132).
Jan 23 00:08:41.920430 containerd[2014]: time="2026-01-23T00:08:41.920337683Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:08:41.921792 containerd[2014]: time="2026-01-23T00:08:41.921651757Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:08:41.922690 containerd[2014]: time="2026-01-23T00:08:41.921755649Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:08:41.922867 kubelet[3616]: E0123 00:08:41.922231 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:08:41.922867 kubelet[3616]: E0123 00:08:41.922292 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\":
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 23 00:08:41.922867 kubelet[3616]: E0123 00:08:41.922425 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:41.925406 containerd[2014]: time="2026-01-23T00:08:41.925341196Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 23 00:08:42.209673 containerd[2014]: time="2026-01-23T00:08:42.209142313Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:42.210460 containerd[2014]: time="2026-01-23T00:08:42.210384819Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 23 00:08:42.210598 containerd[2014]: time="2026-01-23T00:08:42.210425515Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93" Jan 23 00:08:42.210843 kubelet[3616]: E0123 00:08:42.210784 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:08:42.211343 kubelet[3616]: E0123 00:08:42.210856 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 23 00:08:42.211343 kubelet[3616]: E0123 00:08:42.210959 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:42.211343 kubelet[3616]: E0123 00:08:42.211035 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:08:42.233370 sshd[5840]: Accepted 
publickey for core from 4.153.228.146 port 57132 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:42.236283 sshd-session[5840]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:42.245186 systemd-logind[1983]: New session 16 of user core. Jan 23 00:08:42.256307 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 23 00:08:42.751883 sshd[5843]: Connection closed by 4.153.228.146 port 57132 Jan 23 00:08:42.752662 sshd-session[5840]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:42.764672 systemd[1]: sshd@15-172.31.18.130:22-4.153.228.146:57132.service: Deactivated successfully. Jan 23 00:08:42.769987 systemd[1]: session-16.scope: Deactivated successfully. Jan 23 00:08:42.773395 systemd-logind[1983]: Session 16 logged out. Waiting for processes to exit. Jan 23 00:08:42.776985 systemd-logind[1983]: Removed session 16. Jan 23 00:08:42.838388 systemd[1]: Started sshd@16-172.31.18.130:22-4.153.228.146:57140.service - OpenSSH per-connection server daemon (4.153.228.146:57140). Jan 23 00:08:43.367049 sshd[5855]: Accepted publickey for core from 4.153.228.146 port 57140 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:43.369478 sshd-session[5855]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:43.378639 systemd-logind[1983]: New session 17 of user core. Jan 23 00:08:43.385828 systemd[1]: Started session-17.scope - Session 17 of User core. 
Jan 23 00:08:44.642890 containerd[2014]: time="2026-01-23T00:08:44.641920375Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:08:44.941798 containerd[2014]: time="2026-01-23T00:08:44.941342730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:44.943530 containerd[2014]: time="2026-01-23T00:08:44.943377164Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:08:44.943692 containerd[2014]: time="2026-01-23T00:08:44.943574057Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:08:44.944102 kubelet[3616]: E0123 00:08:44.944037 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:44.945309 kubelet[3616]: E0123 00:08:44.944108 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:44.945309 kubelet[3616]: E0123 00:08:44.944219 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:44.945309 kubelet[3616]: E0123 00:08:44.944276 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:08:45.080002 sshd[5858]: Connection closed by 4.153.228.146 port 57140 Jan 23 00:08:45.080844 sshd-session[5855]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:45.088855 systemd[1]: sshd@16-172.31.18.130:22-4.153.228.146:57140.service: Deactivated successfully. Jan 23 00:08:45.092761 systemd[1]: session-17.scope: Deactivated successfully. Jan 23 00:08:45.095075 systemd-logind[1983]: Session 17 logged out. Waiting for processes to exit. Jan 23 00:08:45.098823 systemd-logind[1983]: Removed session 17. Jan 23 00:08:45.185615 systemd[1]: Started sshd@17-172.31.18.130:22-4.153.228.146:33956.service - OpenSSH per-connection server daemon (4.153.228.146:33956). Jan 23 00:08:45.759261 sshd[5870]: Accepted publickey for core from 4.153.228.146 port 33956 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:45.761159 sshd-session[5870]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:45.771614 systemd-logind[1983]: New session 18 of user core. Jan 23 00:08:45.780814 systemd[1]: Started session-18.scope - Session 18 of User core. 
Jan 23 00:08:46.645407 containerd[2014]: time="2026-01-23T00:08:46.645324893Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 23 00:08:46.943066 containerd[2014]: time="2026-01-23T00:08:46.942881006Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:46.944244 containerd[2014]: time="2026-01-23T00:08:46.944133647Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 23 00:08:46.944244 containerd[2014]: time="2026-01-23T00:08:46.944202661Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77" Jan 23 00:08:46.944570 kubelet[3616]: E0123 00:08:46.944479 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:46.945085 kubelet[3616]: E0123 00:08:46.944582 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 23 00:08:46.945085 kubelet[3616]: E0123 00:08:46.944867 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606): ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:46.945085 kubelet[3616]: E0123 00:08:46.944927 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:08:47.274916 sshd[5873]: Connection closed by 4.153.228.146 port 33956 Jan 23 00:08:47.276022 sshd-session[5870]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:47.285099 systemd[1]: sshd@17-172.31.18.130:22-4.153.228.146:33956.service: Deactivated successfully. Jan 23 00:08:47.290517 systemd[1]: session-18.scope: Deactivated successfully. Jan 23 00:08:47.292869 systemd-logind[1983]: Session 18 logged out. Waiting for processes to exit. Jan 23 00:08:47.297477 systemd-logind[1983]: Removed session 18. Jan 23 00:08:47.361749 systemd[1]: Started sshd@18-172.31.18.130:22-4.153.228.146:33958.service - OpenSSH per-connection server daemon (4.153.228.146:33958). Jan 23 00:08:47.636613 containerd[2014]: time="2026-01-23T00:08:47.636531869Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 23 00:08:47.896722 sshd[5891]: Accepted publickey for core from 4.153.228.146 port 33958 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:47.899540 sshd-session[5891]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:47.908637 systemd-logind[1983]: New session 19 of user core. 
Jan 23 00:08:47.913862 containerd[2014]: time="2026-01-23T00:08:47.913662787Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 23 00:08:47.914938 containerd[2014]: time="2026-01-23T00:08:47.914774427Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 23 00:08:47.914938 containerd[2014]: time="2026-01-23T00:08:47.914899584Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85" Jan 23 00:08:47.915304 kubelet[3616]: E0123 00:08:47.915246 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:08:47.915669 kubelet[3616]: E0123 00:08:47.915316 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 23 00:08:47.915836 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 23 00:08:47.916636 kubelet[3616]: E0123 00:08:47.915473 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 23 00:08:47.917082 kubelet[3616]: E0123 00:08:47.916660 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90" Jan 23 00:08:48.699240 sshd[5894]: Connection closed by 4.153.228.146 port 33958 Jan 23 00:08:48.699769 sshd-session[5891]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:48.709874 systemd[1]: sshd@18-172.31.18.130:22-4.153.228.146:33958.service: Deactivated successfully. Jan 23 00:08:48.715167 systemd[1]: session-19.scope: Deactivated successfully. Jan 23 00:08:48.719825 systemd-logind[1983]: Session 19 logged out. Waiting for processes to exit. Jan 23 00:08:48.723996 systemd-logind[1983]: Removed session 19. Jan 23 00:08:48.790329 systemd[1]: Started sshd@19-172.31.18.130:22-4.153.228.146:33964.service - OpenSSH per-connection server daemon (4.153.228.146:33964). 
Jan 23 00:08:49.322487 sshd[5908]: Accepted publickey for core from 4.153.228.146 port 33964 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:49.330040 sshd-session[5908]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:49.344910 systemd-logind[1983]: New session 20 of user core. Jan 23 00:08:49.352827 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 23 00:08:49.859227 sshd[5911]: Connection closed by 4.153.228.146 port 33964 Jan 23 00:08:49.858966 sshd-session[5908]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:49.869277 systemd-logind[1983]: Session 20 logged out. Waiting for processes to exit. Jan 23 00:08:49.869744 systemd[1]: sshd@19-172.31.18.130:22-4.153.228.146:33964.service: Deactivated successfully. Jan 23 00:08:49.874900 systemd[1]: session-20.scope: Deactivated successfully. Jan 23 00:08:49.881377 systemd-logind[1983]: Removed session 20. Jan 23 00:08:53.635902 kubelet[3616]: E0123 00:08:53.635615 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" 
podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:08:54.632604 kubelet[3616]: E0123 00:08:54.632081 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:08:54.953614 systemd[1]: Started sshd@20-172.31.18.130:22-4.153.228.146:35118.service - OpenSSH per-connection server daemon (4.153.228.146:35118). Jan 23 00:08:55.470862 sshd[5925]: Accepted publickey for core from 4.153.228.146 port 35118 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:08:55.475386 sshd-session[5925]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:08:55.489003 systemd-logind[1983]: New session 21 of user core. Jan 23 00:08:55.497834 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 23 00:08:55.635011 kubelet[3616]: E0123 00:08:55.634582 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:08:56.019676 sshd[5928]: Connection closed by 4.153.228.146 port 35118 Jan 23 00:08:56.021721 sshd-session[5925]: pam_unix(sshd:session): session closed for user core Jan 23 00:08:56.034294 systemd-logind[1983]: Session 21 logged out. Waiting for processes to exit. Jan 23 00:08:56.034482 systemd[1]: sshd@20-172.31.18.130:22-4.153.228.146:35118.service: Deactivated successfully. Jan 23 00:08:56.042018 systemd[1]: session-21.scope: Deactivated successfully. Jan 23 00:08:56.049158 systemd-logind[1983]: Removed session 21. 
Jan 23 00:08:57.640044 kubelet[3616]: E0123 00:08:57.639958 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:09:01.116766 systemd[1]: Started sshd@21-172.31.18.130:22-4.153.228.146:35132.service - OpenSSH per-connection server daemon (4.153.228.146:35132). 
Jan 23 00:09:01.637147 kubelet[3616]: E0123 00:09:01.636370 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:09:01.638422 kubelet[3616]: E0123 00:09:01.638246 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90" Jan 23 00:09:01.664050 sshd[5971]: Accepted publickey for core from 4.153.228.146 port 35132 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:09:01.669867 sshd-session[5971]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:01.684075 systemd-logind[1983]: New session 22 of user core. Jan 23 00:09:01.692712 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 23 00:09:02.176517 sshd[5974]: Connection closed by 4.153.228.146 port 35132 Jan 23 00:09:02.177247 sshd-session[5971]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:02.189052 systemd[1]: sshd@21-172.31.18.130:22-4.153.228.146:35132.service: Deactivated successfully. Jan 23 00:09:02.195841 systemd[1]: session-22.scope: Deactivated successfully. Jan 23 00:09:02.199537 systemd-logind[1983]: Session 22 logged out. Waiting for processes to exit. Jan 23 00:09:02.204474 systemd-logind[1983]: Removed session 22. Jan 23 00:09:04.636243 kubelet[3616]: E0123 00:09:04.636131 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:09:06.635002 kubelet[3616]: E0123 00:09:06.633950 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve 
reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:09:06.636887 kubelet[3616]: E0123 00:09:06.636617 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:09:07.273955 systemd[1]: Started sshd@22-172.31.18.130:22-4.153.228.146:48926.service - OpenSSH per-connection server daemon (4.153.228.146:48926). Jan 23 00:09:07.835221 sshd[5988]: Accepted publickey for core from 4.153.228.146 port 48926 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac Jan 23 00:09:07.839717 sshd-session[5988]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 23 00:09:07.854352 systemd-logind[1983]: New session 23 of user core. Jan 23 00:09:07.861959 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 23 00:09:08.356530 sshd[5993]: Connection closed by 4.153.228.146 port 48926 Jan 23 00:09:08.357453 sshd-session[5988]: pam_unix(sshd:session): session closed for user core Jan 23 00:09:08.366465 systemd[1]: sshd@22-172.31.18.130:22-4.153.228.146:48926.service: Deactivated successfully. Jan 23 00:09:08.368261 systemd-logind[1983]: Session 23 logged out. Waiting for processes to exit. Jan 23 00:09:08.373561 systemd[1]: session-23.scope: Deactivated successfully. Jan 23 00:09:08.381570 systemd-logind[1983]: Removed session 23. 
Jan 23 00:09:09.638405 kubelet[3616]: E0123 00:09:09.638251 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:09:12.634656 kubelet[3616]: E0123 00:09:12.634130 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:09:12.635282 kubelet[3616]: E0123 00:09:12.634956 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:09:13.449830 systemd[1]: Started sshd@23-172.31.18.130:22-4.153.228.146:48930.service - OpenSSH per-connection server daemon (4.153.228.146:48930).
Jan 23 00:09:13.988012 sshd[6008]: Accepted publickey for core from 4.153.228.146 port 48930 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:09:13.991315 sshd-session[6008]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:09:14.002700 systemd-logind[1983]: New session 24 of user core.
Jan 23 00:09:14.010758 systemd[1]: Started session-24.scope - Session 24 of User core.
Jan 23 00:09:14.522671 sshd[6011]: Connection closed by 4.153.228.146 port 48930
Jan 23 00:09:14.524083 sshd-session[6008]: pam_unix(sshd:session): session closed for user core
Jan 23 00:09:14.534626 systemd[1]: sshd@23-172.31.18.130:22-4.153.228.146:48930.service: Deactivated successfully.
Jan 23 00:09:14.539434 systemd[1]: session-24.scope: Deactivated successfully.
Jan 23 00:09:14.549225 systemd-logind[1983]: Session 24 logged out. Waiting for processes to exit.
Jan 23 00:09:14.552382 systemd-logind[1983]: Removed session 24.
Jan 23 00:09:17.634198 kubelet[3616]: E0123 00:09:17.634113 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e"
Jan 23 00:09:19.627039 systemd[1]: Started sshd@24-172.31.18.130:22-4.153.228.146:53816.service - OpenSSH per-connection server daemon (4.153.228.146:53816).
Jan 23 00:09:19.642055 kubelet[3616]: E0123 00:09:19.641401 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d"
Jan 23 00:09:20.209045 sshd[6023]: Accepted publickey for core from 4.153.228.146 port 53816 ssh2: RSA SHA256:OQwekwJG2VWm71TI4Ud3tpaGRVIoR2yiBkuCxEn+5Ac
Jan 23 00:09:20.212457 sshd-session[6023]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0)
Jan 23 00:09:20.223860 systemd-logind[1983]: New session 25 of user core.
Jan 23 00:09:20.231105 systemd[1]: Started session-25.scope - Session 25 of User core.
Jan 23 00:09:20.632225 kubelet[3616]: E0123 00:09:20.632085 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:09:20.797406 sshd[6032]: Connection closed by 4.153.228.146 port 53816
Jan 23 00:09:20.864751 sshd-session[6023]: pam_unix(sshd:session): session closed for user core
Jan 23 00:09:20.873234 systemd[1]: sshd@24-172.31.18.130:22-4.153.228.146:53816.service: Deactivated successfully.
Jan 23 00:09:20.881921 systemd[1]: session-25.scope: Deactivated successfully.
Jan 23 00:09:20.885264 systemd-logind[1983]: Session 25 logged out. Waiting for processes to exit.
Jan 23 00:09:20.891479 systemd-logind[1983]: Removed session 25.
Jan 23 00:09:23.636907 containerd[2014]: time="2026-01-23T00:09:23.636211840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 23 00:09:23.637560 kubelet[3616]: E0123 00:09:23.637099 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:09:23.639097 kubelet[3616]: E0123 00:09:23.638997 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:09:23.903256 containerd[2014]: time="2026-01-23T00:09:23.902960751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:23.905909 containerd[2014]: time="2026-01-23T00:09:23.905770550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 23 00:09:23.906068 containerd[2014]: time="2026-01-23T00:09:23.905799875Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=69"
Jan 23 00:09:23.906554 kubelet[3616]: E0123 00:09:23.906466 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:09:23.906714 kubelet[3616]: E0123 00:09:23.906560 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 23 00:09:23.906714 kubelet[3616]: E0123 00:09:23.906686 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-csi start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/csi:v3.30.4\": ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:23.908372 containerd[2014]: time="2026-01-23T00:09:23.908304918Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 23 00:09:24.208307 containerd[2014]: time="2026-01-23T00:09:24.207629126Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:24.208935 containerd[2014]: time="2026-01-23T00:09:24.208857167Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 23 00:09:24.209056 containerd[2014]: time="2026-01-23T00:09:24.208980022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=93"
Jan 23 00:09:24.209412 kubelet[3616]: E0123 00:09:24.209299 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:09:24.209540 kubelet[3616]: E0123 00:09:24.209408 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 23 00:09:24.211014 kubelet[3616]: E0123 00:09:24.210635 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container csi-node-driver-registrar start failed in pod csi-node-driver-d5rlp_calico-system(73991cb4-51f1-4920-a4d2-a782912c4922): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:24.211014 kubelet[3616]: E0123 00:09:24.210746 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:09:31.634986 containerd[2014]: time="2026-01-23T00:09:31.633715843Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:09:31.935042 containerd[2014]: time="2026-01-23T00:09:31.933892033Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:31.935435 containerd[2014]: time="2026-01-23T00:09:31.935291505Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:09:31.935435 containerd[2014]: time="2026-01-23T00:09:31.935395349Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:09:31.935897 kubelet[3616]: E0123 00:09:31.935831 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:09:31.936725 kubelet[3616]: E0123 00:09:31.935906 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:09:31.936725 kubelet[3616]: E0123 00:09:31.936022 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-lh2s6_calico-apiserver(9783fed8-ce36-4bde-9a81-2ed0b850cd1e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:31.936725 kubelet[3616]: E0123 00:09:31.936081 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e"
Jan 23 00:09:32.633540 containerd[2014]: time="2026-01-23T00:09:32.633349899Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 23 00:09:32.946430 containerd[2014]: time="2026-01-23T00:09:32.946263944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:32.947534 containerd[2014]: time="2026-01-23T00:09:32.947411075Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 23 00:09:32.947687 containerd[2014]: time="2026-01-23T00:09:32.947447416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=73"
Jan 23 00:09:32.947944 kubelet[3616]: E0123 00:09:32.947884 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:09:32.948945 kubelet[3616]: E0123 00:09:32.947958 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 23 00:09:32.948945 kubelet[3616]: E0123 00:09:32.948072 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker:v3.30.4\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:32.950430 containerd[2014]: time="2026-01-23T00:09:32.950372477Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 23 00:09:33.197648 containerd[2014]: time="2026-01-23T00:09:33.197273286Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:33.198758 containerd[2014]: time="2026-01-23T00:09:33.198648194Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 23 00:09:33.198887 containerd[2014]: time="2026-01-23T00:09:33.198695594Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:09:33.199380 kubelet[3616]: E0123 00:09:33.199299 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:09:33.199380 kubelet[3616]: E0123 00:09:33.199362 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 23 00:09:33.199779 kubelet[3616]: E0123 00:09:33.199523 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container whisker-backend start failed in pod whisker-bd87c786-pqgwc_calico-system(0b4bad72-057e-4231-8c95-8f0d608e570d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:33.199779 kubelet[3616]: E0123 00:09:33.199623 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d"
Jan 23 00:09:34.632906 containerd[2014]: time="2026-01-23T00:09:34.632805273Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 23 00:09:34.889054 containerd[2014]: time="2026-01-23T00:09:34.888884034Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:34.890456 containerd[2014]: time="2026-01-23T00:09:34.890379134Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 23 00:09:34.890663 containerd[2014]: time="2026-01-23T00:09:34.890532202Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:09:34.890939 kubelet[3616]: E0123 00:09:34.890867 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:09:34.891483 kubelet[3616]: E0123 00:09:34.890949 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 23 00:09:34.891483 kubelet[3616]: E0123 00:09:34.891058 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container goldmane start failed in pod goldmane-7c778bb748-h2nff_calico-system(6e08ee65-394c-47ae-9b9c-08be18fa8e62): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:34.891483 kubelet[3616]: E0123 00:09:34.891109 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62"
Jan 23 00:09:35.021137 systemd[1]: cri-containerd-c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b.scope: Deactivated successfully.
Jan 23 00:09:35.023377 systemd[1]: cri-containerd-c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b.scope: Consumed 7.580s CPU time, 60.6M memory peak, 196K read from disk.
Jan 23 00:09:35.030948 containerd[2014]: time="2026-01-23T00:09:35.030867160Z" level=info msg="received container exit event container_id:\"c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b\" id:\"c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b\" pid:3185 exit_status:1 exited_at:{seconds:1769126975 nanos:29813163}"
Jan 23 00:09:35.079665 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b-rootfs.mount: Deactivated successfully.
Jan 23 00:09:35.521618 kubelet[3616]: I0123 00:09:35.521103 3616 scope.go:117] "RemoveContainer" containerID="c991e5ff3ac42b53c26813f8a743c6afde0e248dfd018efd56d86ec84cdc880b"
Jan 23 00:09:35.525646 containerd[2014]: time="2026-01-23T00:09:35.525578726Z" level=info msg="CreateContainer within sandbox \"2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}"
Jan 23 00:09:35.582075 containerd[2014]: time="2026-01-23T00:09:35.580376469Z" level=info msg="Container fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:09:35.598100 containerd[2014]: time="2026-01-23T00:09:35.598001196Z" level=info msg="CreateContainer within sandbox \"2716ef6bb024a53fb959880973b278733aeee7454689bf619a2f2929f5e675d1\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955\""
Jan 23 00:09:35.599557 containerd[2014]: time="2026-01-23T00:09:35.599266048Z" level=info msg="StartContainer for \"fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955\""
Jan 23 00:09:35.601747 containerd[2014]: time="2026-01-23T00:09:35.601667762Z" level=info msg="connecting to shim fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955" address="unix:///run/containerd/s/5addb1488a7cc4b7811e252b6c5b9d437f768bdf939ddc624b000f7ba5177f99" protocol=ttrpc version=3
Jan 23 00:09:35.643820 systemd[1]: Started cri-containerd-fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955.scope - libcontainer container fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955.
Jan 23 00:09:35.739375 containerd[2014]: time="2026-01-23T00:09:35.739311063Z" level=info msg="StartContainer for \"fe9361f412ecd0bc4c180b30ba88f84c4fd26d66cd1360aaf463674ff5ec3955\" returns successfully"
Jan 23 00:09:36.279103 systemd[1]: cri-containerd-6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034.scope: Deactivated successfully.
Jan 23 00:09:36.279710 systemd[1]: cri-containerd-6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034.scope: Consumed 27.347s CPU time, 101.6M memory peak.
Jan 23 00:09:36.286901 containerd[2014]: time="2026-01-23T00:09:36.286787857Z" level=info msg="received container exit event container_id:\"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\" id:\"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\" pid:3952 exit_status:1 exited_at:{seconds:1769126976 nanos:285943215}"
Jan 23 00:09:36.347361 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034-rootfs.mount: Deactivated successfully.
Jan 23 00:09:36.531693 kubelet[3616]: I0123 00:09:36.531223 3616 scope.go:117] "RemoveContainer" containerID="6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034"
Jan 23 00:09:36.537948 containerd[2014]: time="2026-01-23T00:09:36.537369528Z" level=info msg="CreateContainer within sandbox \"80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}"
Jan 23 00:09:36.558993 containerd[2014]: time="2026-01-23T00:09:36.558939262Z" level=info msg="Container cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be: CDI devices from CRI Config.CDIDevices: []"
Jan 23 00:09:36.582137 containerd[2014]: time="2026-01-23T00:09:36.582050105Z" level=info msg="CreateContainer within sandbox \"80b35f2d2418acef6452a39ab99854173d5d19a89e04dfce63b83c61deeffcba\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be\""
Jan 23 00:09:36.583750 containerd[2014]: time="2026-01-23T00:09:36.583526422Z" level=info msg="StartContainer for \"cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be\""
Jan 23 00:09:36.586075 containerd[2014]: time="2026-01-23T00:09:36.585933066Z" level=info msg="connecting to shim cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be" address="unix:///run/containerd/s/1a2f7e10d93a63f78e8384340d022cb9d4a5aa4b7fd76c7e4914d115a5e85cf1" protocol=ttrpc version=3
Jan 23 00:09:36.630998 systemd[1]: Started cri-containerd-cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be.scope - libcontainer container cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be.
Jan 23 00:09:36.640868 containerd[2014]: time="2026-01-23T00:09:36.640803578Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 23 00:09:36.641376 kubelet[3616]: E0123 00:09:36.641196 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922"
Jan 23 00:09:36.729699 containerd[2014]: time="2026-01-23T00:09:36.729427184Z" level=info msg="StartContainer for \"cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be\" returns successfully"
Jan 23 00:09:36.913614 containerd[2014]: time="2026-01-23T00:09:36.913547498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:36.918210 containerd[2014]: time="2026-01-23T00:09:36.917554790Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 23 00:09:36.918210 containerd[2014]: time="2026-01-23T00:09:36.917696271Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=85"
Jan 23 00:09:36.918411 kubelet[3616]: E0123 00:09:36.917874 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:09:36.918411 kubelet[3616]: E0123 00:09:36.917930 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 23 00:09:36.918411 kubelet[3616]: E0123 00:09:36.918030 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-kube-controllers start failed in pod calico-kube-controllers-6f7974d7c8-hppng_calico-system(fb41eab3-a03e-4b48-bc83-fecd2d987e90): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:36.918411 kubelet[3616]: E0123 00:09:36.918120 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90"
Jan 23 00:09:37.633688 containerd[2014]: time="2026-01-23T00:09:37.632684892Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 23 00:09:37.930622 containerd[2014]: time="2026-01-23T00:09:37.930458012Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 23 00:09:37.932016 containerd[2014]: time="2026-01-23T00:09:37.931895876Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 23 00:09:37.932145 containerd[2014]: time="2026-01-23T00:09:37.931965921Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=77"
Jan 23 00:09:37.932370 kubelet[3616]: E0123 00:09:37.932315 3616 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:09:37.932864 kubelet[3616]: E0123 00:09:37.932379 3616 kuberuntime_image.go:43] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 23 00:09:37.932864 kubelet[3616]: E0123 00:09:37.932642 3616 kuberuntime_manager.go:1449] "Unhandled Error" err="container calico-apiserver start failed in pod calico-apiserver-866c48949f-zlhcq_calico-apiserver(7325a6f4-e6b9-4cb1-9e21-13aa088be606): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve reference \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 23 00:09:37.932864 kubelet[3616]: E0123 00:09:37.932732 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606"
Jan 23 00:09:39.681486 systemd[1]: cri-containerd-59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6.scope: Deactivated successfully.
Jan 23 00:09:39.682167 systemd[1]: cri-containerd-59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6.scope: Consumed 5.663s CPU time, 20.9M memory peak.
Jan 23 00:09:39.688305 containerd[2014]: time="2026-01-23T00:09:39.688203520Z" level=info msg="received container exit event container_id:\"59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6\" id:\"59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6\" pid:3178 exit_status:1 exited_at:{seconds:1769126979 nanos:687776606}"
Jan 23 00:09:39.741724 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6-rootfs.mount: Deactivated successfully.
Jan 23 00:09:40.566835 kubelet[3616]: I0123 00:09:40.566790 3616 scope.go:117] "RemoveContainer" containerID="59695cb77765d3dd70e1f5cb307d3559ace321001fcb7603a6d303345b3e1cb6" Jan 23 00:09:40.571521 containerd[2014]: time="2026-01-23T00:09:40.571232808Z" level=info msg="CreateContainer within sandbox \"0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 23 00:09:40.584913 containerd[2014]: time="2026-01-23T00:09:40.584847977Z" level=info msg="Container 63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4: CDI devices from CRI Config.CDIDevices: []" Jan 23 00:09:40.598357 containerd[2014]: time="2026-01-23T00:09:40.598307439Z" level=info msg="CreateContainer within sandbox \"0e0bebd83bc5352751456c4093e47c425a0ed9148228242aa66481ec72883d05\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4\"" Jan 23 00:09:40.599937 containerd[2014]: time="2026-01-23T00:09:40.599790210Z" level=info msg="StartContainer for \"63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4\"" Jan 23 00:09:40.603312 containerd[2014]: time="2026-01-23T00:09:40.603231552Z" level=info msg="connecting to shim 63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4" address="unix:///run/containerd/s/ce01ae9628162ffe250699adff47b8a9473f4dc28981542f95b114c934a9fbe8" protocol=ttrpc version=3 Jan 23 00:09:40.642796 systemd[1]: Started cri-containerd-63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4.scope - libcontainer container 63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4. 
Jan 23 00:09:40.727432 containerd[2014]: time="2026-01-23T00:09:40.727260259Z" level=info msg="StartContainer for \"63034955e49224f47260856cfbdcf719382db69d0abc00a31f85a2970c59a3c4\" returns successfully" Jan 23 00:09:40.812308 kubelet[3616]: E0123 00:09:40.812019 3616 controller.go:195] "Failed to update lease" err="Put \"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": context deadline exceeded" Jan 23 00:09:43.638521 kubelet[3616]: E0123 00:09:43.638443 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:09:45.637937 kubelet[3616]: E0123 00:09:45.637866 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d" Jan 23 00:09:47.632482 kubelet[3616]: E0123 00:09:47.632339 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-7c778bb748-h2nff" podUID="6e08ee65-394c-47ae-9b9c-08be18fa8e62" Jan 23 00:09:48.244766 systemd[1]: cri-containerd-cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be.scope: Deactivated successfully. Jan 23 00:09:48.247344 containerd[2014]: time="2026-01-23T00:09:48.245350192Z" level=info msg="received container exit event container_id:\"cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be\" id:\"cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be\" pid:6153 exit_status:1 exited_at:{seconds:1769126988 nanos:244987973}" Jan 23 00:09:48.286879 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be-rootfs.mount: Deactivated successfully. 
Jan 23 00:09:48.607384 kubelet[3616]: I0123 00:09:48.607328 3616 scope.go:117] "RemoveContainer" containerID="6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034" Jan 23 00:09:48.608031 kubelet[3616]: I0123 00:09:48.607997 3616 scope.go:117] "RemoveContainer" containerID="cd11141fa2c8ac8f1cd99bb6fe8b09a1d7635ec14a26d77f267da1452db777be" Jan 23 00:09:48.608273 kubelet[3616]: E0123 00:09:48.608230 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-65cdcdfd6d-xzvjm_tigera-operator(c3a52394-8874-499e-80dd-a505c33670e9)\"" pod="tigera-operator/tigera-operator-65cdcdfd6d-xzvjm" podUID="c3a52394-8874-499e-80dd-a505c33670e9" Jan 23 00:09:48.612769 containerd[2014]: time="2026-01-23T00:09:48.612715727Z" level=info msg="RemoveContainer for \"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\"" Jan 23 00:09:48.622635 containerd[2014]: time="2026-01-23T00:09:48.622452084Z" level=info msg="RemoveContainer for \"6d167f1a66f985f7504b5a7c278e8e7bdcd4f43395fff56b962aedaab984e034\" returns successfully" Jan 23 00:09:50.632440 kubelet[3616]: E0123 00:09:50.632369 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-6f7974d7c8-hppng" podUID="fb41eab3-a03e-4b48-bc83-fecd2d987e90" Jan 23 00:09:50.812668 kubelet[3616]: E0123 00:09:50.812595 3616 controller.go:195] "Failed to update lease" err="Put 
\"https://172.31.18.130:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ip-172-31-18-130?timeout=10s\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 00:09:51.633909 kubelet[3616]: E0123 00:09:51.632760 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-zlhcq" podUID="7325a6f4-e6b9-4cb1-9e21-13aa088be606" Jan 23 00:09:51.635333 kubelet[3616]: E0123 00:09:51.635253 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-d5rlp" podUID="73991cb4-51f1-4920-a4d2-a782912c4922" Jan 23 00:09:55.632528 kubelet[3616]: E0123 00:09:55.632449 3616 pod_workers.go:1324] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-866c48949f-lh2s6" podUID="9783fed8-ce36-4bde-9a81-2ed0b850cd1e" Jan 23 00:09:56.632757 kubelet[3616]: E0123 00:09:56.632667 3616 pod_workers.go:1324] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve reference \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-bd87c786-pqgwc" podUID="0b4bad72-057e-4231-8c95-8f0d608e570d"