Jan 14 23:44:18.517560 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 14 23:44:18.517584 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Wed Jan 14 22:02:18 -00 2026
Jan 14 23:44:18.517594 kernel: KASLR enabled
Jan 14 23:44:18.517600 kernel: efi: EFI v2.7 by EDK II
Jan 14 23:44:18.517605 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438357218
Jan 14 23:44:18.517611 kernel: random: crng init done
Jan 14 23:44:18.517619 kernel: secureboot: Secure boot disabled
Jan 14 23:44:18.517625 kernel: ACPI: Early table checksum verification disabled
Jan 14 23:44:18.517631 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS )
Jan 14 23:44:18.517638 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013)
Jan 14 23:44:18.517645 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517651 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517657 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517663 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517672 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517678 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517685 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517691 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517698 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517704 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 14 23:44:18.517711 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 14 23:44:18.517717 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 14 23:44:18.517724 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 14 23:44:18.517732 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff]
Jan 14 23:44:18.517738 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff]
Jan 14 23:44:18.517744 kernel: Zone ranges:
Jan 14 23:44:18.517751 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 14 23:44:18.517757 kernel: DMA32 empty
Jan 14 23:44:18.517763 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff]
Jan 14 23:44:18.517770 kernel: Device empty
Jan 14 23:44:18.517776 kernel: Movable zone start for each node
Jan 14 23:44:18.517782 kernel: Early memory node ranges
Jan 14 23:44:18.517789 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff]
Jan 14 23:44:18.517795 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff]
Jan 14 23:44:18.517801 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff]
Jan 14 23:44:18.517809 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff]
Jan 14 23:44:18.517816 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff]
Jan 14 23:44:18.517822 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff]
Jan 14 23:44:18.517829 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 14 23:44:18.517835 kernel: psci: probing for conduit method from ACPI.
Jan 14 23:44:18.517845 kernel: psci: PSCIv1.3 detected in firmware.
Jan 14 23:44:18.517853 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 14 23:44:18.517866 kernel: psci: Trusted OS migration not required
Jan 14 23:44:18.517873 kernel: psci: SMC Calling Convention v1.1
Jan 14 23:44:18.517880 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 14 23:44:18.517887 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0
Jan 14 23:44:18.517894 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0
Jan 14 23:44:18.517900 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0
Jan 14 23:44:18.517907 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0
Jan 14 23:44:18.517916 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 14 23:44:18.517923 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 14 23:44:18.517929 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3
Jan 14 23:44:18.517936 kernel: Detected PIPT I-cache on CPU0
Jan 14 23:44:18.517943 kernel: CPU features: detected: GIC system register CPU interface
Jan 14 23:44:18.517950 kernel: CPU features: detected: Spectre-v4
Jan 14 23:44:18.517957 kernel: CPU features: detected: Spectre-BHB
Jan 14 23:44:18.517963 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 14 23:44:18.517970 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 14 23:44:18.517977 kernel: CPU features: detected: ARM erratum 1418040
Jan 14 23:44:18.517984 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 14 23:44:18.517992 kernel: alternatives: applying boot alternatives
Jan 14 23:44:18.518000 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a
Jan 14 23:44:18.518007 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear)
Jan 14 23:44:18.518014 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear)
Jan 14 23:44:18.518021 kernel: Fallback order for Node 0: 0
Jan 14 23:44:18.518028 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304
Jan 14 23:44:18.518034 kernel: Policy zone: Normal
Jan 14 23:44:18.518041 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 14 23:44:18.518055 kernel: software IO TLB: area num 4.
Jan 14 23:44:18.518062 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 14 23:44:18.518072 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1
Jan 14 23:44:18.518078 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 14 23:44:18.518086 kernel: rcu: RCU event tracing is enabled.
Jan 14 23:44:18.518093 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4.
Jan 14 23:44:18.518100 kernel: Trampoline variant of Tasks RCU enabled.
Jan 14 23:44:18.518107 kernel: Tracing variant of Tasks RCU enabled.
Jan 14 23:44:18.518114 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 14 23:44:18.518121 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4
Jan 14 23:44:18.518128 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 23:44:18.518135 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4.
Jan 14 23:44:18.518141 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 14 23:44:18.518150 kernel: GICv3: 256 SPIs implemented
Jan 14 23:44:18.518156 kernel: GICv3: 0 Extended SPIs implemented
Jan 14 23:44:18.518163 kernel: Root IRQ handler: gic_handle_irq
Jan 14 23:44:18.518170 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 14 23:44:18.518177 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 14 23:44:18.518183 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 14 23:44:18.518190 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 14 23:44:18.518197 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1)
Jan 14 23:44:18.518204 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1)
Jan 14 23:44:18.518211 kernel: GICv3: using LPI property table @0x0000000100130000
Jan 14 23:44:18.518218 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000
Jan 14 23:44:18.518224 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 14 23:44:18.518233 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 23:44:18.518240 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 14 23:44:18.518247 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 14 23:44:18.518253 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 14 23:44:18.518260 kernel: arm-pv: using stolen time PV
Jan 14 23:44:18.518620 kernel: Console: colour dummy device 80x25
Jan 14 23:44:18.518630 kernel: ACPI: Core revision 20240827
Jan 14 23:44:18.518638 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 14 23:44:18.518650 kernel: pid_max: default: 32768 minimum: 301
Jan 14 23:44:18.518658 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 14 23:44:18.518665 kernel: landlock: Up and running.
Jan 14 23:44:18.518673 kernel: SELinux: Initializing.
Jan 14 23:44:18.518680 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 23:44:18.518687 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear)
Jan 14 23:44:18.518695 kernel: rcu: Hierarchical SRCU implementation.
Jan 14 23:44:18.518703 kernel: rcu: Max phase no-delay instances is 400.
Jan 14 23:44:18.518712 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 14 23:44:18.518719 kernel: Remapping and enabling EFI services.
Jan 14 23:44:18.518726 kernel: smp: Bringing up secondary CPUs ...
Jan 14 23:44:18.518733 kernel: Detected PIPT I-cache on CPU1
Jan 14 23:44:18.518741 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 14 23:44:18.518748 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000
Jan 14 23:44:18.518755 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 23:44:18.518764 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 14 23:44:18.518771 kernel: Detected PIPT I-cache on CPU2
Jan 14 23:44:18.518783 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000
Jan 14 23:44:18.518792 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000
Jan 14 23:44:18.518800 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 23:44:18.518807 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1]
Jan 14 23:44:18.518814 kernel: Detected PIPT I-cache on CPU3
Jan 14 23:44:18.518822 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000
Jan 14 23:44:18.518831 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000
Jan 14 23:44:18.518839 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 14 23:44:18.518846 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1]
Jan 14 23:44:18.518853 kernel: smp: Brought up 1 node, 4 CPUs
Jan 14 23:44:18.518861 kernel: SMP: Total of 4 processors activated.
Jan 14 23:44:18.518869 kernel: CPU: All CPU(s) started at EL1
Jan 14 23:44:18.518878 kernel: CPU features: detected: 32-bit EL0 Support
Jan 14 23:44:18.518885 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 14 23:44:18.518893 kernel: CPU features: detected: Common not Private translations
Jan 14 23:44:18.518901 kernel: CPU features: detected: CRC32 instructions
Jan 14 23:44:18.518908 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 14 23:44:18.518916 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 14 23:44:18.518923 kernel: CPU features: detected: LSE atomic instructions
Jan 14 23:44:18.518932 kernel: CPU features: detected: Privileged Access Never
Jan 14 23:44:18.518939 kernel: CPU features: detected: RAS Extension Support
Jan 14 23:44:18.518947 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 14 23:44:18.518954 kernel: alternatives: applying system-wide alternatives
Jan 14 23:44:18.518962 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3
Jan 14 23:44:18.518970 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12416K init, 1038K bss, 429936K reserved, 16384K cma-reserved)
Jan 14 23:44:18.518978 kernel: devtmpfs: initialized
Jan 14 23:44:18.518987 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 14 23:44:18.518994 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear)
Jan 14 23:44:18.519002 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 14 23:44:18.519009 kernel: 0 pages in range for non-PLT usage
Jan 14 23:44:18.519025 kernel: 515184 pages in range for PLT usage
Jan 14 23:44:18.519032 kernel: pinctrl core: initialized pinctrl subsystem
Jan 14 23:44:18.519040 kernel: SMBIOS 3.0.0 present.
Jan 14 23:44:18.519048 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015
Jan 14 23:44:18.519057 kernel: DMI: Memory slots populated: 1/1
Jan 14 23:44:18.519065 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 14 23:44:18.519072 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations
Jan 14 23:44:18.519080 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 14 23:44:18.519087 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 14 23:44:18.519095 kernel: audit: initializing netlink subsys (disabled)
Jan 14 23:44:18.519103 kernel: audit: type=2000 audit(0.038:1): state=initialized audit_enabled=0 res=1
Jan 14 23:44:18.519112 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 14 23:44:18.519119 kernel: cpuidle: using governor menu
Jan 14 23:44:18.519127 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 14 23:44:18.519134 kernel: ASID allocator initialised with 32768 entries
Jan 14 23:44:18.519142 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 14 23:44:18.519149 kernel: Serial: AMBA PL011 UART driver
Jan 14 23:44:18.519157 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 14 23:44:18.519166 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 14 23:44:18.519174 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 14 23:44:18.519181 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 14 23:44:18.519189 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 14 23:44:18.519196 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 14 23:44:18.519204 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 14 23:44:18.519211 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 14 23:44:18.519220 kernel: ACPI: Added _OSI(Module Device)
Jan 14 23:44:18.519228 kernel: ACPI: Added _OSI(Processor Device)
Jan 14 23:44:18.519235 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 14 23:44:18.519243 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 14 23:44:18.519250 kernel: ACPI: Interpreter enabled
Jan 14 23:44:18.519258 kernel: ACPI: Using GIC for interrupt routing
Jan 14 23:44:18.519287 kernel: ACPI: MCFG table detected, 1 entries
Jan 14 23:44:18.519296 kernel: ACPI: CPU0 has been hot-added
Jan 14 23:44:18.519306 kernel: ACPI: CPU1 has been hot-added
Jan 14 23:44:18.519314 kernel: ACPI: CPU2 has been hot-added
Jan 14 23:44:18.519321 kernel: ACPI: CPU3 has been hot-added
Jan 14 23:44:18.519329 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 14 23:44:18.519336 kernel: printk: legacy console [ttyAMA0] enabled
Jan 14 23:44:18.519344 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 14 23:44:18.519498 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 14 23:44:18.519588 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 14 23:44:18.519670 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 14 23:44:18.519750 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 14 23:44:18.519829 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 14 23:44:18.519839 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 14 23:44:18.519847 kernel: PCI host bridge to bus 0000:00
Jan 14 23:44:18.519934 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 14 23:44:18.520009 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 14 23:44:18.520081 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 14 23:44:18.520153 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 14 23:44:18.520251 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 14 23:44:18.520444 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.520537 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff]
Jan 14 23:44:18.520619 kernel: pci 0000:00:01.0: PCI bridge to [bus 01]
Jan 14 23:44:18.520699 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff]
Jan 14 23:44:18.520779 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 14 23:44:18.520870 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.520955 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff]
Jan 14 23:44:18.521034 kernel: pci 0000:00:01.1: PCI bridge to [bus 02]
Jan 14 23:44:18.521114 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff]
Jan 14 23:44:18.521200 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.521299 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff]
Jan 14 23:44:18.521389 kernel: pci 0000:00:01.2: PCI bridge to [bus 03]
Jan 14 23:44:18.521469 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff]
Jan 14 23:44:18.521548 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 14 23:44:18.521635 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.521715 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff]
Jan 14 23:44:18.521794 kernel: pci 0000:00:01.3: PCI bridge to [bus 04]
Jan 14 23:44:18.521876 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 14 23:44:18.521963 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.522044 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff]
Jan 14 23:44:18.522122 kernel: pci 0000:00:01.4: PCI bridge to [bus 05]
Jan 14 23:44:18.522202 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff]
Jan 14 23:44:18.522293 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 14 23:44:18.522385 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.522464 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff]
Jan 14 23:44:18.522543 kernel: pci 0000:00:01.5: PCI bridge to [bus 06]
Jan 14 23:44:18.522641 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff]
Jan 14 23:44:18.522736 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 14 23:44:18.522824 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.522907 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff]
Jan 14 23:44:18.522987 kernel: pci 0000:00:01.6: PCI bridge to [bus 07]
Jan 14 23:44:18.523074 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.523153 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff]
Jan 14 23:44:18.523233 kernel: pci 0000:00:01.7: PCI bridge to [bus 08]
Jan 14 23:44:18.523340 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.523429 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff]
Jan 14 23:44:18.523511 kernel: pci 0000:00:02.0: PCI bridge to [bus 09]
Jan 14 23:44:18.523608 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.523708 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff]
Jan 14 23:44:18.523799 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a]
Jan 14 23:44:18.523903 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.523987 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff]
Jan 14 23:44:18.524069 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b]
Jan 14 23:44:18.524159 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.524240 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff]
Jan 14 23:44:18.524337 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c]
Jan 14 23:44:18.524427 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.524509 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff]
Jan 14 23:44:18.524590 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d]
Jan 14 23:44:18.524677 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.524760 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff]
Jan 14 23:44:18.524843 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e]
Jan 14 23:44:18.524931 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.525014 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff]
Jan 14 23:44:18.525113 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f]
Jan 14 23:44:18.525202 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.525295 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff]
Jan 14 23:44:18.525379 kernel: pci 0000:00:02.7: PCI bridge to [bus 10]
Jan 14 23:44:18.525464 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.525543 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff]
Jan 14 23:44:18.525635 kernel: pci 0000:00:03.0: PCI bridge to [bus 11]
Jan 14 23:44:18.525720 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.525804 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff]
Jan 14 23:44:18.525883 kernel: pci 0000:00:03.1: PCI bridge to [bus 12]
Jan 14 23:44:18.525963 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff]
Jan 14 23:44:18.526049 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff]
Jan 14 23:44:18.526142 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.526223 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff]
Jan 14 23:44:18.526318 kernel: pci 0000:00:03.2: PCI bridge to [bus 13]
Jan 14 23:44:18.526402 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff]
Jan 14 23:44:18.526483 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff]
Jan 14 23:44:18.526616 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.526705 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff]
Jan 14 23:44:18.526784 kernel: pci 0000:00:03.3: PCI bridge to [bus 14]
Jan 14 23:44:18.526867 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff]
Jan 14 23:44:18.526946 kernel: pci 0000:00:03.3: bridge window [mem 0x11a00000-0x11bfffff]
Jan 14 23:44:18.527031 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.527111 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff]
Jan 14 23:44:18.527192 kernel: pci 0000:00:03.4: PCI bridge to [bus 15]
Jan 14 23:44:18.527286 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff]
Jan 14 23:44:18.527379 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff]
Jan 14 23:44:18.527466 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.527568 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff]
Jan 14 23:44:18.527674 kernel: pci 0000:00:03.5: PCI bridge to [bus 16]
Jan 14 23:44:18.527756 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff]
Jan 14 23:44:18.527835 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff]
Jan 14 23:44:18.527925 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.528005 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff]
Jan 14 23:44:18.528086 kernel: pci 0000:00:03.6: PCI bridge to [bus 17]
Jan 14 23:44:18.528173 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff]
Jan 14 23:44:18.528253 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff]
Jan 14 23:44:18.528353 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.528441 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff]
Jan 14 23:44:18.528523 kernel: pci 0000:00:03.7: PCI bridge to [bus 18]
Jan 14 23:44:18.528604 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff]
Jan 14 23:44:18.529445 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff]
Jan 14 23:44:18.529562 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.529655 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff]
Jan 14 23:44:18.529747 kernel: pci 0000:00:04.0: PCI bridge to [bus 19]
Jan 14 23:44:18.529830 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff]
Jan 14 23:44:18.529911 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff]
Jan 14 23:44:18.530001 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.530082 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff]
Jan 14 23:44:18.530165 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a]
Jan 14 23:44:18.530245 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff]
Jan 14 23:44:18.530343 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 14 23:44:18.530436 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.530518 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff]
Jan 14 23:44:18.530625 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b]
Jan 14 23:44:18.530716 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff]
Jan 14 23:44:18.530808 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 14 23:44:18.530898 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.530979 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff]
Jan 14 23:44:18.531059 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c]
Jan 14 23:44:18.531138 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff]
Jan 14 23:44:18.531217 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 14 23:44:18.531320 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.531403 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff]
Jan 14 23:44:18.531482 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d]
Jan 14 23:44:18.531563 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff]
Jan 14 23:44:18.531644 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff]
Jan 14 23:44:18.531731 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.531811 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff]
Jan 14 23:44:18.531889 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e]
Jan 14 23:44:18.531969 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff]
Jan 14 23:44:18.532050 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff]
Jan 14 23:44:18.532135 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.532216 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff]
Jan 14 23:44:18.532306 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f]
Jan 14 23:44:18.532387 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff]
Jan 14 23:44:18.532465 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff]
Jan 14 23:44:18.532555 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.532636 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff]
Jan 14 23:44:18.532717 kernel: pci 0000:00:04.7: PCI bridge to [bus 20]
Jan 14 23:44:18.532796 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff]
Jan 14 23:44:18.532876 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff]
Jan 14 23:44:18.532967 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 14 23:44:18.533050 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff]
Jan 14 23:44:18.533129 kernel: pci 0000:00:05.0: PCI bridge to [bus 21]
Jan 14 23:44:18.533208 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff]
Jan 14 23:44:18.533353 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff]
Jan 14 23:44:18.533449 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 14 23:44:18.533536 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff]
Jan 14 23:44:18.533619 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 14 23:44:18.533700 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 14 23:44:18.533790 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 14 23:44:18.533874 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit]
Jan 14 23:44:18.533962 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint
Jan 14 23:44:18.534046 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff]
Jan 14 23:44:18.534128 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 14 23:44:18.534225 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 14 23:44:18.534322 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 14 23:44:18.534415 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 14 23:44:18.534507 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff]
Jan 14 23:44:18.534611 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 14 23:44:18.534704 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint
Jan 14 23:44:18.534787 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff]
Jan 14 23:44:18.534869 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 14 23:44:18.534951 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 14 23:44:18.535037 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 14 23:44:18.535117 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 14 23:44:18.535201 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 14 23:44:18.535304 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 14 23:44:18.535393 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 14 23:44:18.535476 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 14 23:44:18.535561 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 14 23:44:18.535642 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 14 23:44:18.535727 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 14 23:44:18.535812 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 14 23:44:18.535892 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 14 23:44:18.535975 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 14 23:44:18.536055 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 14 23:44:18.536133 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 14 23:44:18.536216 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 14 23:44:18.536309 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 14 23:44:18.536394 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 14 23:44:18.536477 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 14 23:44:18.536557 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000
Jan 14 23:44:18.536636 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000
Jan 14 23:44:18.536720 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 14 23:44:18.536802 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 14 23:44:18.536881 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 14 23:44:18.536964 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 14 23:44:18.537043 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 14 23:44:18.537122 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 14 23:44:18.537207 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000
Jan 14 23:44:18.537296 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000
Jan 14 23:44:18.537377 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000
Jan 14 23:44:18.537461 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000
Jan 14 23:44:18.537546 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000
Jan 14 23:44:18.537626 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000
Jan 14 23:44:18.537713 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000
Jan 14 23:44:18.537793 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000
Jan 14 23:44:18.537872 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000
Jan 14 23:44:18.537954 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000
Jan 14 23:44:18.538034 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000
Jan 14 23:44:18.538113 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000
Jan 14
23:44:18.538199 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 14 23:44:18.538287 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 14 23:44:18.538368 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 14 23:44:18.538451 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 14 23:44:18.538530 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 14 23:44:18.538629 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 14 23:44:18.538715 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 14 23:44:18.538795 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 14 23:44:18.538874 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 14 23:44:18.538959 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 14 23:44:18.539038 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 14 23:44:18.539120 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 14 23:44:18.539203 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 14 23:44:18.539301 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 14 23:44:18.539390 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 Jan 14 23:44:18.539473 kernel: pci 0000:00:03.2: 
bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 14 23:44:18.539557 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 14 23:44:18.539637 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 14 23:44:18.539720 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 14 23:44:18.539805 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 14 23:44:18.539888 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 14 23:44:18.539970 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 14 23:44:18.540054 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 14 23:44:18.540133 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 14 23:44:18.540218 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 14 23:44:18.540311 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 14 23:44:18.540398 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 14 23:44:18.540484 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 14 23:44:18.540565 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 14 23:44:18.540646 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Jan 14 23:44:18.540727 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] 
add_size 1000 Jan 14 23:44:18.540808 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 14 23:44:18.540889 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 14 23:44:18.540975 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 14 23:44:18.541056 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 14 23:44:18.541137 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 14 23:44:18.541219 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 14 23:44:18.541308 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 14 23:44:18.541388 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 14 23:44:18.541472 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 14 23:44:18.541553 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 14 23:44:18.541633 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 14 23:44:18.541716 kernel: pci 0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 14 23:44:18.541796 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 14 23:44:18.541877 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 14 23:44:18.541960 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 14 23:44:18.542040 kernel: 
pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 14 23:44:18.542120 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 14 23:44:18.542204 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 14 23:44:18.542291 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 14 23:44:18.542373 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 14 23:44:18.542457 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 14 23:44:18.542538 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 14 23:44:18.542635 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 14 23:44:18.542722 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 14 23:44:18.542804 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 14 23:44:18.542886 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 14 23:44:18.542968 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 14 23:44:18.543047 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 14 23:44:18.543127 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 14 23:44:18.543208 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 14 23:44:18.543310 kernel: pci 0000:00:01.0: bridge window [mem 
0x8000000000-0x80001fffff 64bit pref]: assigned Jan 14 23:44:18.543400 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 14 23:44:18.543480 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 14 23:44:18.543562 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 14 23:44:18.543643 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 14 23:44:18.543725 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 14 23:44:18.543810 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 14 23:44:18.543894 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 14 23:44:18.543974 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 14 23:44:18.544057 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 14 23:44:18.544137 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 14 23:44:18.544219 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 14 23:44:18.544313 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 14 23:44:18.544403 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 14 23:44:18.544486 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 14 23:44:18.544569 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 14 23:44:18.544649 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 14 23:44:18.544731 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 14 23:44:18.544809 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: 
assigned Jan 14 23:44:18.544892 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 14 23:44:18.544972 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 14 23:44:18.545054 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 14 23:44:18.545134 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 14 23:44:18.545215 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 14 23:44:18.545306 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 14 23:44:18.545391 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 14 23:44:18.545473 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 14 23:44:18.545555 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 14 23:44:18.545633 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 14 23:44:18.545715 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 14 23:44:18.545794 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 14 23:44:18.545875 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 14 23:44:18.545956 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 14 23:44:18.546037 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 14 23:44:18.546117 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 14 23:44:18.546198 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 14 23:44:18.546286 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 14 23:44:18.546370 kernel: pci 
0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 14 23:44:18.546452 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 14 23:44:18.546534 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 14 23:44:18.546631 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 14 23:44:18.546717 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 14 23:44:18.546798 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 14 23:44:18.546881 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 14 23:44:18.546962 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 14 23:44:18.547048 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 14 23:44:18.547130 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 14 23:44:18.547212 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 14 23:44:18.547304 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 14 23:44:18.547389 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 14 23:44:18.547469 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 14 23:44:18.547553 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 14 23:44:18.547633 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 14 23:44:18.547714 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 14 23:44:18.547793 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Jan 14 23:44:18.547873 kernel: pci 0000:00:04.4: bridge window [mem 
0x13800000-0x139fffff]: assigned Jan 14 23:44:18.547953 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 14 23:44:18.548035 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 14 23:44:18.548115 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 14 23:44:18.548195 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 14 23:44:18.548292 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 14 23:44:18.548375 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 14 23:44:18.548455 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 14 23:44:18.548535 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 14 23:44:18.548617 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 14 23:44:18.548698 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 14 23:44:18.548778 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 23:44:18.548859 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 14 23:44:18.548938 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 14 23:44:18.549017 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 14 23:44:18.549098 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 14 23:44:18.549178 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 14 23:44:18.549257 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 14 23:44:18.549348 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 14 23:44:18.549430 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 23:44:18.549516 kernel: pci 0000:00:01.5: BAR 0 
[mem 0x14205000-0x14205fff]: assigned Jan 14 23:44:18.549607 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 14 23:44:18.549689 kernel: pci 0000:00:01.6: BAR 0 [mem 0x14206000-0x14206fff]: assigned Jan 14 23:44:18.549769 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 14 23:44:18.549849 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 14 23:44:18.549929 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 14 23:44:18.550009 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 14 23:44:18.550089 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 23:44:18.550172 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 14 23:44:18.550250 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 14 23:44:18.550350 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 14 23:44:18.550434 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 14 23:44:18.550514 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 14 23:44:18.550620 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 14 23:44:18.550713 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 14 23:44:18.550792 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 14 23:44:18.550873 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 14 23:44:18.550953 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 14 23:44:18.551034 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 14 23:44:18.551116 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 14 23:44:18.551197 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 14 23:44:18.551297 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 
23:44:18.551385 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.551467 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 14 23:44:18.551547 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.551630 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.551712 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 14 23:44:18.551792 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.551872 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.551954 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 14 23:44:18.552035 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.552117 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.552200 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 14 23:44:18.552296 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.552380 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.552462 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 14 23:44:18.552542 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.552623 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.552706 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 14 23:44:18.552786 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.552865 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.552945 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 14 23:44:18.553025 kernel: pci 
0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.553105 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.553189 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 14 23:44:18.553279 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.553363 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.553444 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 14 23:44:18.553524 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.553603 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.553688 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 14 23:44:18.553768 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.553850 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.553932 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 14 23:44:18.554013 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.554093 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.554173 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 14 23:44:18.554255 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.554345 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.554428 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 14 23:44:18.554507 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.554601 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.554690 kernel: pci 0000:00:04.5: BAR 0 [mem 
0x1421d000-0x1421dfff]: assigned Jan 14 23:44:18.554774 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.554855 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.554938 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 14 23:44:18.555023 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.555104 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.555186 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 14 23:44:18.555279 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.555372 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.555455 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 14 23:44:18.555538 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.555618 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.555701 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 14 23:44:18.555781 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 14 23:44:18.555865 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 14 23:44:18.555946 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 14 23:44:18.556028 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 14 23:44:18.556111 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 14 23:44:18.556192 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 14 23:44:18.556281 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 14 23:44:18.556363 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Jan 14 23:44:18.556447 kernel: pci 0000:00:03.7: 
bridge window [io 0xa000-0xafff]: assigned Jan 14 23:44:18.556528 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 14 23:44:18.556609 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 14 23:44:18.556689 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 14 23:44:18.556773 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 14 23:44:18.556854 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 14 23:44:18.556936 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557017 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557099 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557178 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557260 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557359 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557447 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557528 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557614 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557704 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557784 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.557880 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.557965 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.558045 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558127 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: 
can't assign; no space Jan 14 23:44:18.558209 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558302 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.558385 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558467 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.558546 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558645 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.558733 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558815 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.558898 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.558980 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559060 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.559144 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559228 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.559327 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559411 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.559493 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559573 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.559658 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559754 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.559840 kernel: pci 0000:00:01.0: bridge 
window [io size 0x1000]: can't assign; no space Jan 14 23:44:18.559920 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jan 14 23:44:18.560019 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 14 23:44:18.560108 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 14 23:44:18.560190 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 14 23:44:18.560279 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 14 23:44:18.560364 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jan 14 23:44:18.560446 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:44:18.560533 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 14 23:44:18.560615 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 14 23:44:18.560700 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Jan 14 23:44:18.560780 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:44:18.560869 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 14 23:44:18.560957 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 14 23:44:18.561038 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 14 23:44:18.561118 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Jan 14 23:44:18.561199 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:44:18.561296 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 14 23:44:18.561380 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 14 23:44:18.561460 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Jan 14 23:44:18.561539 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:44:18.561628 kernel: pci 0000:05:00.0: BAR 4 [mem 
0x8000800000-0x8000803fff 64bit pref]: assigned Jan 14 23:44:18.561711 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 14 23:44:18.561791 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 14 23:44:18.561870 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Jan 14 23:44:18.561949 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:44:18.562035 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 14 23:44:18.562117 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 14 23:44:18.562199 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 14 23:44:18.562286 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 14 23:44:18.562368 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:44:18.562449 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 14 23:44:18.562529 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 14 23:44:18.562627 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:44:18.562714 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 14 23:44:18.562795 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 14 23:44:18.562876 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:44:18.562958 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 14 23:44:18.563037 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 14 23:44:18.563120 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:44:18.563202 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 14 23:44:18.563296 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Jan 14 23:44:18.563382 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Jan 14 23:44:18.563466 kernel: pci 
0000:00:02.2: PCI bridge to [bus 0b] Jan 14 23:44:18.563548 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Jan 14 23:44:18.563633 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Jan 14 23:44:18.563717 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 14 23:44:18.563801 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Jan 14 23:44:18.563880 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Jan 14 23:44:18.563962 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 14 23:44:18.564043 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Jan 14 23:44:18.564124 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Jan 14 23:44:18.564205 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 14 23:44:18.564300 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Jan 14 23:44:18.564385 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 14 23:44:18.564476 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 14 23:44:18.564568 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Jan 14 23:44:18.564654 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 14 23:44:18.564737 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 14 23:44:18.564819 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Jan 14 23:44:18.564899 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 14 23:44:18.564984 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 14 23:44:18.565065 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Jan 14 23:44:18.565146 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Jan 14 23:44:18.565228 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 14 23:44:18.565319 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Jan 14 23:44:18.565401 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Jan 14 23:44:18.565486 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 14 23:44:18.565568 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Jan 14 23:44:18.565648 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Jan 14 23:44:18.565745 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Jan 14 23:44:18.565831 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 14 23:44:18.565912 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Jan 14 23:44:18.565995 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Jan 14 23:44:18.566086 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Jan 14 23:44:18.566171 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 14 23:44:18.566251 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Jan 14 23:44:18.566354 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Jan 14 23:44:18.566436 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Jan 14 23:44:18.566518 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 14 23:44:18.566622 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Jan 14 23:44:18.566710 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Jan 14 23:44:18.566790 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 14 23:44:18.566873 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 14 23:44:18.566953 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Jan 14 23:44:18.567032 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Jan 14 23:44:18.567111 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 14 23:44:18.567195 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 14 23:44:18.567287 kernel: pci 
0000:00:03.7: bridge window [io 0xa000-0xafff] Jan 14 23:44:18.567371 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Jan 14 23:44:18.567451 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 14 23:44:18.567534 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 14 23:44:18.567616 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Jan 14 23:44:18.567700 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Jan 14 23:44:18.567780 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Jan 14 23:44:18.567864 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 14 23:44:18.567944 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Jan 14 23:44:18.568027 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Jan 14 23:44:18.568107 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Jan 14 23:44:18.568188 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 14 23:44:18.568285 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Jan 14 23:44:18.568376 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Jan 14 23:44:18.568457 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Jan 14 23:44:18.568540 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 14 23:44:18.568619 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Jan 14 23:44:18.568698 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Jan 14 23:44:18.568777 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Jan 14 23:44:18.568862 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 14 23:44:18.568943 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Jan 14 23:44:18.569022 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Jan 14 23:44:18.569101 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] 
Jan 14 23:44:18.569183 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 14 23:44:18.569263 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Jan 14 23:44:18.569362 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Jan 14 23:44:18.569443 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 14 23:44:18.569527 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 14 23:44:18.569609 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Jan 14 23:44:18.569688 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Jan 14 23:44:18.569767 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 14 23:44:18.569849 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 14 23:44:18.569933 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Jan 14 23:44:18.570012 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Jan 14 23:44:18.570093 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 14 23:44:18.570179 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 14 23:44:18.570287 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Jan 14 23:44:18.570374 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Jan 14 23:44:18.570455 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Jan 14 23:44:18.570539 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 14 23:44:18.570630 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 14 23:44:18.570707 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 14 23:44:18.570794 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 14 23:44:18.570871 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 14 23:44:18.570957 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 14 23:44:18.571032 kernel: pci_bus 
0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 14 23:44:18.571114 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 14 23:44:18.571190 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 14 23:44:18.571286 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 14 23:44:18.571371 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 14 23:44:18.571455 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 14 23:44:18.571533 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 14 23:44:18.571619 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 14 23:44:18.571694 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 14 23:44:18.571778 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 14 23:44:18.571852 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 14 23:44:18.571933 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 14 23:44:18.572007 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 14 23:44:18.572088 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 14 23:44:18.572163 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 14 23:44:18.572247 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Jan 14 23:44:18.572334 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Jan 14 23:44:18.572421 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Jan 14 23:44:18.572497 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Jan 14 23:44:18.572579 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Jan 14 23:44:18.572657 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Jan 14 23:44:18.572744 kernel: pci_bus 
0000:0d: resource 1 [mem 0x11800000-0x119fffff] Jan 14 23:44:18.572820 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Jan 14 23:44:18.572902 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Jan 14 23:44:18.572977 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 14 23:44:18.573065 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Jan 14 23:44:18.573141 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 14 23:44:18.573222 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Jan 14 23:44:18.573329 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 14 23:44:18.573416 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Jan 14 23:44:18.573495 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Jan 14 23:44:18.573577 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Jan 14 23:44:18.573652 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Jan 14 23:44:18.573733 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Jan 14 23:44:18.573807 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Jan 14 23:44:18.573883 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Jan 14 23:44:18.573964 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Jan 14 23:44:18.574038 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Jan 14 23:44:18.574113 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Jan 14 23:44:18.574202 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Jan 14 23:44:18.574295 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Jan 14 23:44:18.574377 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Jan 14 23:44:18.574457 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Jan 14 23:44:18.574533 
kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Jan 14 23:44:18.574627 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 14 23:44:18.574718 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Jan 14 23:44:18.574798 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Jan 14 23:44:18.574873 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 14 23:44:18.574955 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Jan 14 23:44:18.575031 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Jan 14 23:44:18.575106 kernel: pci_bus 0000:18: resource 2 [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 14 23:44:18.575188 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Jan 14 23:44:18.575274 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Jan 14 23:44:18.575356 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Jan 14 23:44:18.575442 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Jan 14 23:44:18.575519 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Jan 14 23:44:18.575594 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Jan 14 23:44:18.575676 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 14 23:44:18.575754 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Jan 14 23:44:18.575828 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Jan 14 23:44:18.575909 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Jan 14 23:44:18.575985 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Jan 14 23:44:18.576059 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Jan 14 23:44:18.576143 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Jan 14 23:44:18.576218 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Jan 14 23:44:18.576302 kernel: pci_bus 0000:1d: resource 2 [mem 
0x8003800000-0x80039fffff 64bit pref] Jan 14 23:44:18.576384 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Jan 14 23:44:18.576459 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Jan 14 23:44:18.576534 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 14 23:44:18.576616 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Jan 14 23:44:18.576690 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Jan 14 23:44:18.576765 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 14 23:44:18.576851 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Jan 14 23:44:18.576925 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Jan 14 23:44:18.577002 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 14 23:44:18.577083 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Jan 14 23:44:18.577158 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Jan 14 23:44:18.577232 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Jan 14 23:44:18.577243 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 14 23:44:18.577251 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 14 23:44:18.577259 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 14 23:44:18.577283 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 14 23:44:18.577292 kernel: iommu: Default domain type: Translated Jan 14 23:44:18.577300 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 14 23:44:18.577308 kernel: efivars: Registered efivars operations Jan 14 23:44:18.577316 kernel: vgaarb: loaded Jan 14 23:44:18.577324 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 14 23:44:18.577331 kernel: VFS: Disk quotas dquot_6.6.0 Jan 14 23:44:18.577341 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 14 23:44:18.577349 kernel: pnp: PnP ACPI 
init Jan 14 23:44:18.577446 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 14 23:44:18.577458 kernel: pnp: PnP ACPI: found 1 devices Jan 14 23:44:18.577466 kernel: NET: Registered PF_INET protocol family Jan 14 23:44:18.577474 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 14 23:44:18.577484 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Jan 14 23:44:18.577493 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 14 23:44:18.577501 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 14 23:44:18.577509 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 14 23:44:18.577517 kernel: TCP: Hash tables configured (established 131072 bind 65536) Jan 14 23:44:18.577526 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 14 23:44:18.577534 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 14 23:44:18.577544 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 14 23:44:18.577633 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 14 23:44:18.577645 kernel: PCI: CLS 0 bytes, default 64 Jan 14 23:44:18.577653 kernel: kvm [1]: HYP mode not available Jan 14 23:44:18.577661 kernel: Initialise system trusted keyrings Jan 14 23:44:18.577669 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Jan 14 23:44:18.577677 kernel: Key type asymmetric registered Jan 14 23:44:18.577687 kernel: Asymmetric key parser 'x509' registered Jan 14 23:44:18.577695 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 14 23:44:18.577703 kernel: io scheduler mq-deadline registered Jan 14 23:44:18.577710 kernel: io scheduler kyber registered Jan 14 23:44:18.577718 kernel: io scheduler bfq registered Jan 14 23:44:18.577727 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 14 
23:44:18.577811 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Jan 14 23:44:18.577892 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Jan 14 23:44:18.577974 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.578057 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Jan 14 23:44:18.578138 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Jan 14 23:44:18.578218 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.578313 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Jan 14 23:44:18.578397 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Jan 14 23:44:18.578478 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.578584 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Jan 14 23:44:18.578678 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Jan 14 23:44:18.578762 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.578846 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Jan 14 23:44:18.578936 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Jan 14 23:44:18.579024 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.579107 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Jan 14 23:44:18.579189 kernel: pcieport 0000:00:01.5: AER: enabled with IRQ 55 Jan 14 23:44:18.579280 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.579366 
kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Jan 14 23:44:18.579447 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Jan 14 23:44:18.579527 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.579613 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Jan 14 23:44:18.579693 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Jan 14 23:44:18.579774 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.579785 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 14 23:44:18.579866 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Jan 14 23:44:18.579945 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Jan 14 23:44:18.580026 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.580108 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Jan 14 23:44:18.580188 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Jan 14 23:44:18.580311 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.580407 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Jan 14 23:44:18.580488 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Jan 14 23:44:18.580570 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.580653 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Jan 14 23:44:18.580739 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Jan 14 23:44:18.580818 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ 
NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.580900 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Jan 14 23:44:18.580981 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Jan 14 23:44:18.581060 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.581144 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Jan 14 23:44:18.581224 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Jan 14 23:44:18.581317 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.581402 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Jan 14 23:44:18.581482 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Jan 14 23:44:18.581561 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.581646 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Jan 14 23:44:18.581726 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Jan 14 23:44:18.581806 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.581816 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 14 23:44:18.581898 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Jan 14 23:44:18.581979 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Jan 14 23:44:18.582060 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.582143 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Jan 14 23:44:18.582223 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Jan 14 23:44:18.582313 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.582397 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Jan 14 23:44:18.582477 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Jan 14 23:44:18.582572 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.582670 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Jan 14 23:44:18.582752 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Jan 14 23:44:18.582832 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.582914 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Jan 14 23:44:18.582994 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Jan 14 23:44:18.583074 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.583160 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Jan 14 23:44:18.583240 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Jan 14 23:44:18.583334 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.583420 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Jan 14 23:44:18.583500 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Jan 14 23:44:18.583579 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.583664 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Jan 14 23:44:18.583744 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Jan 14 23:44:18.583824 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.583835 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 14 23:44:18.583914 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Jan 14 23:44:18.583995 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Jan 14 23:44:18.584074 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.584160 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Jan 14 23:44:18.584241 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Jan 14 23:44:18.584330 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.584415 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Jan 14 23:44:18.584496 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Jan 14 23:44:18.584575 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.584661 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Jan 14 23:44:18.584742 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Jan 14 23:44:18.584822 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.584906 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Jan 14 23:44:18.584988 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Jan 14 23:44:18.585067 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.585154 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Jan 14 23:44:18.585235 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Jan 14 23:44:18.585327 kernel: pcieport 
0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.585415 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Jan 14 23:44:18.585500 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Jan 14 23:44:18.585580 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.585667 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Jan 14 23:44:18.585749 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Jan 14 23:44:18.585830 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.585913 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Jan 14 23:44:18.585996 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Jan 14 23:44:18.586075 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 14 23:44:18.586086 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 14 23:44:18.586096 kernel: ACPI: button: Power Button [PWRB] Jan 14 23:44:18.586182 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Jan 14 23:44:18.586285 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 14 23:44:18.586298 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 14 23:44:18.586306 kernel: thunder_xcv, ver 1.0 Jan 14 23:44:18.586314 kernel: thunder_bgx, ver 1.0 Jan 14 23:44:18.586322 kernel: nicpf, ver 1.0 Jan 14 23:44:18.586333 kernel: nicvf, ver 1.0 Jan 14 23:44:18.586435 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 14 23:44:18.586515 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-14T23:44:17 UTC (1768434257) Jan 14 23:44:18.586525 kernel: hid: raw HID events driver (C) Jiri 
Kosina Jan 14 23:44:18.586534 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 14 23:44:18.586542 kernel: watchdog: NMI not fully supported Jan 14 23:44:18.586552 kernel: watchdog: Hard watchdog permanently disabled Jan 14 23:44:18.586574 kernel: NET: Registered PF_INET6 protocol family Jan 14 23:44:18.586583 kernel: Segment Routing with IPv6 Jan 14 23:44:18.586591 kernel: In-situ OAM (IOAM) with IPv6 Jan 14 23:44:18.586599 kernel: NET: Registered PF_PACKET protocol family Jan 14 23:44:18.586607 kernel: Key type dns_resolver registered Jan 14 23:44:18.586615 kernel: registered taskstats version 1 Jan 14 23:44:18.586626 kernel: Loading compiled-in X.509 certificates Jan 14 23:44:18.586634 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: a690a20944211e11dad41e677dd7158a4ddc3c87' Jan 14 23:44:18.586646 kernel: Demotion targets for Node 0: null Jan 14 23:44:18.586656 kernel: Key type .fscrypt registered Jan 14 23:44:18.586664 kernel: Key type fscrypt-provisioning registered Jan 14 23:44:18.586671 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 14 23:44:18.586679 kernel: ima: Allocated hash algorithm: sha1
Jan 14 23:44:18.586687 kernel: ima: No architecture policies found
Jan 14 23:44:18.586697 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng)
Jan 14 23:44:18.586705 kernel: clk: Disabling unused clocks
Jan 14 23:44:18.586713 kernel: PM: genpd: Disabling unused power domains
Jan 14 23:44:18.586721 kernel: Freeing unused kernel memory: 12416K
Jan 14 23:44:18.586729 kernel: Run /init as init process
Jan 14 23:44:18.586737 kernel: with arguments:
Jan 14 23:44:18.586745 kernel: /init
Jan 14 23:44:18.586754 kernel: with environment:
Jan 14 23:44:18.586762 kernel: HOME=/
Jan 14 23:44:18.586770 kernel: TERM=linux
Jan 14 23:44:18.586778 kernel: ACPI: bus type USB registered
Jan 14 23:44:18.586786 kernel: usbcore: registered new interface driver usbfs
Jan 14 23:44:18.586793 kernel: usbcore: registered new interface driver hub
Jan 14 23:44:18.586801 kernel: usbcore: registered new device driver usb
Jan 14 23:44:18.586910 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 14 23:44:18.586995 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1
Jan 14 23:44:18.587079 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010
Jan 14 23:44:18.587162 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller
Jan 14 23:44:18.587245 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2
Jan 14 23:44:18.587352 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed
Jan 14 23:44:18.587466 kernel: hub 1-0:1.0: USB hub found
Jan 14 23:44:18.587571 kernel: hub 1-0:1.0: 4 ports detected
Jan 14 23:44:18.587681 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM.
Jan 14 23:44:18.587782 kernel: hub 2-0:1.0: USB hub found
Jan 14 23:44:18.587871 kernel: hub 2-0:1.0: 4 ports detected
Jan 14 23:44:18.587966 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues
Jan 14 23:44:18.588051 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB)
Jan 14 23:44:18.588062 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk.
Jan 14 23:44:18.588071 kernel: GPT:25804799 != 104857599
Jan 14 23:44:18.588079 kernel: GPT:Alternate GPT header not at the end of the disk.
Jan 14 23:44:18.588088 kernel: GPT:25804799 != 104857599
Jan 14 23:44:18.588096 kernel: GPT: Use GNU Parted to correct GPT errors.
Jan 14 23:44:18.588106 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9
Jan 14 23:44:18.588114 kernel: SCSI subsystem initialized
Jan 14 23:44:18.588123 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log.
Jan 14 23:44:18.588131 kernel: device-mapper: uevent: version 1.0.3
Jan 14 23:44:18.588140 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev
Jan 14 23:44:18.588148 kernel: device-mapper: verity: sha256 using shash "sha256-ce"
Jan 14 23:44:18.588158 kernel: raid6: neonx8 gen() 15713 MB/s
Jan 14 23:44:18.588166 kernel: raid6: neonx4 gen() 15681 MB/s
Jan 14 23:44:18.588175 kernel: raid6: neonx2 gen() 13202 MB/s
Jan 14 23:44:18.588183 kernel: raid6: neonx1 gen() 10316 MB/s
Jan 14 23:44:18.588191 kernel: raid6: int64x8 gen() 6810 MB/s
Jan 14 23:44:18.588200 kernel: raid6: int64x4 gen() 7315 MB/s
Jan 14 23:44:18.588208 kernel: raid6: int64x2 gen() 6090 MB/s
Jan 14 23:44:18.588216 kernel: raid6: int64x1 gen() 5053 MB/s
Jan 14 23:44:18.588226 kernel: raid6: using algorithm neonx8 gen() 15713 MB/s
Jan 14 23:44:18.588234 kernel: raid6: .... xor() 11942 MB/s, rmw enabled
Jan 14 23:44:18.588242 kernel: raid6: using neon recovery algorithm
Jan 14 23:44:18.588251 kernel: xor: measuring software checksum speed
Jan 14 23:44:18.588261 kernel: 8regs : 21150 MB/sec
Jan 14 23:44:18.588281 kernel: 32regs : 21699 MB/sec
Jan 14 23:44:18.588292 kernel: arm64_neon : 28167 MB/sec
Jan 14 23:44:18.588301 kernel: xor: using function: arm64_neon (28167 MB/sec)
Jan 14 23:44:18.588409 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd
Jan 14 23:44:18.588422 kernel: Btrfs loaded, zoned=no, fsverity=no
Jan 14 23:44:18.588431 kernel: BTRFS: device fsid 78d59ed4-d19c-4fcc-8998-5f0c19b42daf devid 1 transid 38 /dev/mapper/usr (253:0) scanned by mount (275)
Jan 14 23:44:18.588440 kernel: BTRFS info (device dm-0): first mount of filesystem 78d59ed4-d19c-4fcc-8998-5f0c19b42daf
Jan 14 23:44:18.588448 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm
Jan 14 23:44:18.588459 kernel: BTRFS info (device dm-0): disabling log replay at mount time
Jan 14 23:44:18.588467 kernel: BTRFS info (device dm-0): enabling free space tree
Jan 14 23:44:18.588476 kernel: loop: module loaded
Jan 14 23:44:18.588484 kernel: loop0: detected capacity change from 0 to 91488
Jan 14 23:44:18.588492 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher
Jan 14 23:44:18.588593 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd
Jan 14 23:44:18.588608 systemd[1]: Successfully made /usr/ read-only.
Jan 14 23:44:18.588620 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 23:44:18.588629 systemd[1]: Detected virtualization kvm.
Jan 14 23:44:18.588637 systemd[1]: Detected architecture arm64.
Jan 14 23:44:18.588646 systemd[1]: Running in initrd.
Jan 14 23:44:18.588654 systemd[1]: No hostname configured, using default hostname.
Jan 14 23:44:18.588665 systemd[1]: Hostname set to .
Jan 14 23:44:18.588674 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 23:44:18.588682 systemd[1]: Queued start job for default target initrd.target.
Jan 14 23:44:18.588691 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr.
Jan 14 23:44:18.588699 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 23:44:18.588708 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 23:44:18.588719 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM...
Jan 14 23:44:18.588728 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 23:44:18.588737 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT...
Jan 14 23:44:18.588746 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A...
Jan 14 23:44:18.588755 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 23:44:18.588764 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 23:44:18.588774 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System.
Jan 14 23:44:18.588783 systemd[1]: Reached target paths.target - Path Units.
Jan 14 23:44:18.588792 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 23:44:18.588800 systemd[1]: Reached target swap.target - Swaps.
Jan 14 23:44:18.588809 systemd[1]: Reached target timers.target - Timer Units.
Jan 14 23:44:18.588818 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket.
Jan 14 23:44:18.588827 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket.
Jan 14 23:44:18.588838 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 23:44:18.588846 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log).
Jan 14 23:44:18.588856 systemd[1]: Listening on systemd-journald.socket - Journal Sockets.
Jan 14 23:44:18.588865 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 23:44:18.588875 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 23:44:18.588884 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 23:44:18.588894 systemd[1]: Reached target sockets.target - Socket Units.
Jan 14 23:44:18.588903 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met.
Jan 14 23:44:18.588912 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup...
Jan 14 23:44:18.588921 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 23:44:18.588930 systemd[1]: Finished network-cleanup.service - Network Cleanup.
Jan 14 23:44:18.588939 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply).
Jan 14 23:44:18.588948 systemd[1]: Starting systemd-fsck-usr.service...
Jan 14 23:44:18.588958 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 23:44:18.588966 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 23:44:18.588976 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 23:44:18.588985 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup.
Jan 14 23:44:18.588996 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 23:44:18.589004 systemd[1]: Finished systemd-fsck-usr.service.
Jan 14 23:44:18.589014 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully...
Jan 14 23:44:18.589047 systemd-journald[416]: Collecting audit messages is enabled.
Jan 14 23:44:18.589070 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this.
Jan 14 23:44:18.589078 kernel: Bridge firewalling registered
Jan 14 23:44:18.589087 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 23:44:18.589096 kernel: audit: type=1130 audit(1768434258.530:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589105 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully.
Jan 14 23:44:18.589116 kernel: audit: type=1130 audit(1768434258.534:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589125 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 23:44:18.589134 kernel: audit: type=1130 audit(1768434258.539:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589143 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters...
Jan 14 23:44:18.589152 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 23:44:18.589161 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 23:44:18.589171 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 23:44:18.589180 kernel: audit: type=1130 audit(1768434258.566:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589189 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 23:44:18.589200 kernel: audit: type=1130 audit(1768434258.571:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589208 kernel: audit: type=1334 audit(1768434258.573:7): prog-id=6 op=LOAD
Jan 14 23:44:18.589217 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 23:44:18.589226 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 23:44:18.589237 kernel: audit: type=1130 audit(1768434258.582:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.589246 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook...
Jan 14 23:44:18.589255 systemd-journald[416]: Journal started
Jan 14 23:44:18.589292 systemd-journald[416]: Runtime Journal (/run/log/journal/c7cbdb0d0e7e47fe85347ae482ef18a3) is 8M, max 319.5M, 311.5M free.
Jan 14 23:44:18.530000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.566000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.571000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.573000 audit: BPF prog-id=6 op=LOAD
Jan 14 23:44:18.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.528100 systemd-modules-load[420]: Inserted module 'br_netfilter'
Jan 14 23:44:18.591028 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 23:44:18.591000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.596309 kernel: audit: type=1130 audit(1768434258.591:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.597556 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories...
Jan 14 23:44:18.605301 dracut-cmdline[450]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=e4a6d042213df6c386c00b2ef561482ef59cf24ca6770345ce520c577e366e5a
Jan 14 23:44:18.616455 systemd-tmpfiles[463]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring.
Jan 14 23:44:18.622177 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories.
Jan 14 23:44:18.626940 kernel: audit: type=1130 audit(1768434258.623:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.623000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.638895 systemd-resolved[444]: Positive Trust Anchors:
Jan 14 23:44:18.638913 systemd-resolved[444]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 23:44:18.638916 systemd-resolved[444]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16
Jan 14 23:44:18.638947 systemd-resolved[444]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test
Jan 14 23:44:18.664003 systemd-resolved[444]: Defaulting to hostname 'linux'.
Jan 14 23:44:18.664861 systemd[1]: Started systemd-resolved.service - Network Name Resolution.
Jan 14 23:44:18.665000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.665884 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups.
Jan 14 23:44:18.705303 kernel: Loading iSCSI transport class v2.0-870.
Jan 14 23:44:18.715298 kernel: iscsi: registered transport (tcp)
Jan 14 23:44:18.729315 kernel: iscsi: registered transport (qla4xxx)
Jan 14 23:44:18.729350 kernel: QLogic iSCSI HBA Driver
Jan 14 23:44:18.758233 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 23:44:18.784440 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 23:44:18.785000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.786603 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 23:44:18.829935 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook.
Jan 14 23:44:18.830000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.832285 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook...
Jan 14 23:44:18.833749 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline...
Jan 14 23:44:18.869816 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 23:44:18.870000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.871000 audit: BPF prog-id=7 op=LOAD
Jan 14 23:44:18.871000 audit: BPF prog-id=8 op=LOAD
Jan 14 23:44:18.872656 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 23:44:18.908504 systemd-udevd[697]: Using default interface naming scheme 'v257'.
Jan 14 23:44:18.916185 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 23:44:18.917000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.920442 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook...
Jan 14 23:44:18.935936 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 23:44:18.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.937000 audit: BPF prog-id=9 op=LOAD
Jan 14 23:44:18.938641 systemd[1]: Starting systemd-networkd.service - Network Configuration...
Jan 14 23:44:18.945072 dracut-pre-trigger[782]: rd.md=0: removing MD RAID activation
Jan 14 23:44:18.969349 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook.
Jan 14 23:44:18.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.971210 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 23:44:18.983465 systemd-networkd[804]: lo: Link UP
Jan 14 23:44:18.983473 systemd-networkd[804]: lo: Gained carrier
Jan 14 23:44:18.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:18.984095 systemd[1]: Started systemd-networkd.service - Network Configuration.
Jan 14 23:44:18.985453 systemd[1]: Reached target network.target - Network.
Jan 14 23:44:19.060675 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 23:44:19.061000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:19.063405 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook...
Jan 14 23:44:19.147238 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM.
Jan 14 23:44:19.158047 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1
Jan 14 23:44:19.158091 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0
Jan 14 23:44:19.161630 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2
Jan 14 23:44:19.162632 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT.
Jan 14 23:44:19.182638 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A.
Jan 14 23:44:19.190811 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM.
Jan 14 23:44:19.193061 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary...
Jan 14 23:44:19.202858 systemd-networkd[804]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 23:44:19.202872 systemd-networkd[804]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network.
Jan 14 23:44:19.203313 systemd-networkd[804]: eth0: Link UP
Jan 14 23:44:19.206406 systemd-networkd[804]: eth0: Gained carrier
Jan 14 23:44:19.206420 systemd-networkd[804]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network
Jan 14 23:44:19.208000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:19.207226 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 23:44:19.207494 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 23:44:19.216388 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0
Jan 14 23:44:19.217875 kernel: usbcore: registered new interface driver usbhid
Jan 14 23:44:19.217898 kernel: usbhid: USB HID core driver
Jan 14 23:44:19.208726 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 23:44:19.219408 disk-uuid[878]: Primary Header is updated.
Jan 14 23:44:19.219408 disk-uuid[878]: Secondary Entries is updated.
Jan 14 23:44:19.219408 disk-uuid[878]: Secondary Header is updated.
Jan 14 23:44:19.213610 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup...
Jan 14 23:44:19.253024 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 23:44:19.253000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:19.254344 systemd-networkd[804]: eth0: DHCPv4 address 10.0.22.230/25, gateway 10.0.22.129 acquired from 10.0.22.129
Jan 14 23:44:19.297353 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook.
Jan 14 23:44:19.298000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:19.298904 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems.
Jan 14 23:44:19.300504 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 23:44:19.302421 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 23:44:19.305014 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook...
Jan 14 23:44:19.336157 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook.
Jan 14 23:44:19.336000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.251802 disk-uuid[880]: Warning: The kernel is still using the old partition table.
Jan 14 23:44:20.251802 disk-uuid[880]: The new table will be used at the next reboot or after you
Jan 14 23:44:20.251802 disk-uuid[880]: run partprobe(8) or kpartx(8)
Jan 14 23:44:20.251802 disk-uuid[880]: The operation has completed successfully.
Jan 14 23:44:20.260513 systemd[1]: disk-uuid.service: Deactivated successfully.
Jan 14 23:44:20.261000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.261000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.260620 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary.
Jan 14 23:44:20.262520 systemd[1]: Starting ignition-setup.service - Ignition (setup)...
Jan 14 23:44:20.296308 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (911)
Jan 14 23:44:20.298042 kernel: BTRFS info (device vda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941
Jan 14 23:44:20.298078 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm
Jan 14 23:44:20.303306 kernel: BTRFS info (device vda6): turning on async discard
Jan 14 23:44:20.303343 kernel: BTRFS info (device vda6): enabling free space tree
Jan 14 23:44:20.308271 kernel: BTRFS info (device vda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941
Jan 14 23:44:20.308638 systemd[1]: Finished ignition-setup.service - Ignition (setup).
Jan 14 23:44:20.309000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.310733 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)...
Jan 14 23:44:20.467209 ignition[930]: Ignition 2.22.0
Jan 14 23:44:20.467227 ignition[930]: Stage: fetch-offline
Jan 14 23:44:20.467261 ignition[930]: no configs at "/usr/lib/ignition/base.d"
Jan 14 23:44:20.467284 ignition[930]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 23:44:20.467446 ignition[930]: parsed url from cmdline: ""
Jan 14 23:44:20.470506 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline).
Jan 14 23:44:20.472000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.467449 ignition[930]: no config URL provided
Jan 14 23:44:20.467454 ignition[930]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 23:44:20.467461 ignition[930]: no config at "/usr/lib/ignition/user.ign"
Jan 14 23:44:20.467465 ignition[930]: failed to fetch config: resource requires networking
Jan 14 23:44:20.467623 ignition[930]: Ignition finished successfully
Jan 14 23:44:20.475044 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)...
Jan 14 23:44:20.508565 ignition[940]: Ignition 2.22.0
Jan 14 23:44:20.508576 ignition[940]: Stage: fetch
Jan 14 23:44:20.508711 ignition[940]: no configs at "/usr/lib/ignition/base.d"
Jan 14 23:44:20.508718 ignition[940]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 23:44:20.508798 ignition[940]: parsed url from cmdline: ""
Jan 14 23:44:20.508802 ignition[940]: no config URL provided
Jan 14 23:44:20.508806 ignition[940]: reading system config file "/usr/lib/ignition/user.ign"
Jan 14 23:44:20.508812 ignition[940]: no config at "/usr/lib/ignition/user.ign"
Jan 14 23:44:20.509032 ignition[940]: config drive ("/dev/disk/by-label/config-2") not found. Waiting...
Jan 14 23:44:20.509053 ignition[940]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting...
Jan 14 23:44:20.509287 ignition[940]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1
Jan 14 23:44:20.604641 systemd-networkd[804]: eth0: Gained IPv6LL
Jan 14 23:44:20.853614 ignition[940]: GET result: OK
Jan 14 23:44:20.853881 ignition[940]: parsing config with SHA512: f7e3aba1f9a139a8ac554cc84a2e0b285c66a6d9907f93555e940f7d9e989401d8f08b63fee64f74215af53255c57656979507374a4ead4d5e21c0c00c57925e
Jan 14 23:44:20.858179 unknown[940]: fetched base config from "system"
Jan 14 23:44:20.858197 unknown[940]: fetched base config from "system"
Jan 14 23:44:20.858839 ignition[940]: fetch: fetch complete
Jan 14 23:44:20.858203 unknown[940]: fetched user config from "openstack"
Jan 14 23:44:20.858844 ignition[940]: fetch: fetch passed
Jan 14 23:44:20.865132 kernel: kauditd_printk_skb: 20 callbacks suppressed
Jan 14 23:44:20.865155 kernel: audit: type=1130 audit(1768434260.861:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.861000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.860542 systemd[1]: Finished ignition-fetch.service - Ignition (fetch).
Jan 14 23:44:20.858901 ignition[940]: Ignition finished successfully
Jan 14 23:44:20.863115 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)...
Jan 14 23:44:20.888687 ignition[948]: Ignition 2.22.0
Jan 14 23:44:20.888705 ignition[948]: Stage: kargs
Jan 14 23:44:20.888844 ignition[948]: no configs at "/usr/lib/ignition/base.d"
Jan 14 23:44:20.888852 ignition[948]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 23:44:20.889576 ignition[948]: kargs: kargs passed
Jan 14 23:44:20.889619 ignition[948]: Ignition finished successfully
Jan 14 23:44:20.892000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.892020 systemd[1]: Finished ignition-kargs.service - Ignition (kargs).
Jan 14 23:44:20.895639 systemd[1]: Starting ignition-disks.service - Ignition (disks)...
Jan 14 23:44:20.897691 kernel: audit: type=1130 audit(1768434260.892:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:20.929681 ignition[956]: Ignition 2.22.0
Jan 14 23:44:20.929700 ignition[956]: Stage: disks
Jan 14 23:44:20.929844 ignition[956]: no configs at "/usr/lib/ignition/base.d"
Jan 14 23:44:20.929852 ignition[956]: no config dir at "/usr/lib/ignition/base.platform.d/openstack"
Jan 14 23:44:20.930580 ignition[956]: disks: disks passed
Jan 14 23:44:20.933511 systemd[1]: Finished ignition-disks.service - Ignition (disks).
Jan 14 23:44:20.930623 ignition[956]: Ignition finished successfully Jan 14 23:44:20.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:20.936577 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 14 23:44:20.940628 kernel: audit: type=1130 audit(1768434260.936:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:20.939839 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 14 23:44:20.941540 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 14 23:44:20.943109 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:44:20.944630 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:44:20.946943 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 14 23:44:20.985711 systemd-fsck[966]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 14 23:44:20.987817 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 14 23:44:20.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:20.990080 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 14 23:44:20.993862 kernel: audit: type=1130 audit(1768434260.989:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.087303 kernel: EXT4-fs (vda9): mounted filesystem 05dab3f9-40c2-46d9-a2a2-3da8ed7c4451 r/w with ordered data mode. 
Quota mode: none. Jan 14 23:44:21.088291 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 14 23:44:21.089403 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 14 23:44:21.092335 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:44:21.094160 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 14 23:44:21.095099 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 14 23:44:21.095673 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 14 23:44:21.097965 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 14 23:44:21.097994 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:44:21.114019 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 14 23:44:21.115957 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 14 23:44:21.125387 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (974) Jan 14 23:44:21.128632 kernel: BTRFS info (device vda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:44:21.128796 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:44:21.135144 kernel: BTRFS info (device vda6): turning on async discard Jan 14 23:44:21.135263 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 23:44:21.136397 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 23:44:21.172563 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:21.181993 initrd-setup-root[1004]: cut: /sysroot/etc/passwd: No such file or directory Jan 14 23:44:21.186404 initrd-setup-root[1011]: cut: /sysroot/etc/group: No such file or directory Jan 14 23:44:21.190763 initrd-setup-root[1018]: cut: /sysroot/etc/shadow: No such file or directory Jan 14 23:44:21.195588 initrd-setup-root[1025]: cut: /sysroot/etc/gshadow: No such file or directory Jan 14 23:44:21.279794 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 14 23:44:21.280000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.281927 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 14 23:44:21.285099 kernel: audit: type=1130 audit(1768434261.280:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.285045 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 14 23:44:21.297713 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 14 23:44:21.299303 kernel: BTRFS info (device vda6): last unmount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:44:21.317675 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 14 23:44:21.321342 kernel: audit: type=1130 audit(1768434261.318:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.318000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:21.329829 ignition[1092]: INFO : Ignition 2.22.0 Jan 14 23:44:21.329829 ignition[1092]: INFO : Stage: mount Jan 14 23:44:21.332278 ignition[1092]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:44:21.332278 ignition[1092]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 23:44:21.332278 ignition[1092]: INFO : mount: mount passed Jan 14 23:44:21.332278 ignition[1092]: INFO : Ignition finished successfully Jan 14 23:44:21.338230 kernel: audit: type=1130 audit(1768434261.333:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.333000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:21.333007 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 14 23:44:22.207366 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:24.216380 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:28.223350 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:28.229063 coreos-metadata[976]: Jan 14 23:44:28.228 WARN failed to locate config-drive, using the metadata service API instead Jan 14 23:44:28.247280 coreos-metadata[976]: Jan 14 23:44:28.247 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 23:44:28.393984 coreos-metadata[976]: Jan 14 23:44:28.393 INFO Fetch successful Jan 14 23:44:28.395224 coreos-metadata[976]: Jan 14 23:44:28.394 INFO wrote hostname ci-4515-1-0-n-1d3be4f164 to /sysroot/etc/hostname Jan 14 23:44:28.397483 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. Jan 14 23:44:28.398331 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. 
Jan 14 23:44:28.399000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:28.399000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:28.404383 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 14 23:44:28.407518 kernel: audit: type=1130 audit(1768434268.399:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:28.407540 kernel: audit: type=1131 audit(1768434268.399:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:28.423624 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 14 23:44:28.445342 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1111) Jan 14 23:44:28.448244 kernel: BTRFS info (device vda6): first mount of filesystem 0eb28982-35f7-4b76-8133-b752f60f3941 Jan 14 23:44:28.448285 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 14 23:44:28.453320 kernel: BTRFS info (device vda6): turning on async discard Jan 14 23:44:28.453340 kernel: BTRFS info (device vda6): enabling free space tree Jan 14 23:44:28.454865 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 14 23:44:28.484530 ignition[1129]: INFO : Ignition 2.22.0 Jan 14 23:44:28.484530 ignition[1129]: INFO : Stage: files Jan 14 23:44:28.486357 ignition[1129]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:44:28.486357 ignition[1129]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 23:44:28.486357 ignition[1129]: DEBUG : files: compiled without relabeling support, skipping Jan 14 23:44:28.490137 ignition[1129]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 14 23:44:28.490137 ignition[1129]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 14 23:44:28.492984 ignition[1129]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 14 23:44:28.494418 ignition[1129]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 14 23:44:28.494418 ignition[1129]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 14 23:44:28.493541 unknown[1129]: wrote ssh authorized keys file for user: core Jan 14 23:44:28.500174 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 23:44:28.501941 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 14 23:44:29.200549 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 14 23:44:29.365484 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file 
"/sysroot/home/core/install.sh" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:44:29.367208 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:44:29.379239 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET 
https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 14 23:44:29.749788 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 14 23:44:31.180876 ignition[1129]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 14 23:44:31.180876 ignition[1129]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 14 23:44:31.184572 ignition[1129]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:44:31.186231 ignition[1129]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 14 23:44:31.186231 ignition[1129]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 14 23:44:31.186231 ignition[1129]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 14 23:44:31.186231 ignition[1129]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 14 23:44:31.195053 kernel: audit: type=1130 audit(1768434271.189:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.189000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:31.195118 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:44:31.195118 ignition[1129]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 14 23:44:31.195118 ignition[1129]: INFO : files: files passed Jan 14 23:44:31.195118 ignition[1129]: INFO : Ignition finished successfully Jan 14 23:44:31.187989 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 14 23:44:31.191144 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 14 23:44:31.194598 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 14 23:44:31.209523 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 14 23:44:31.209641 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 14 23:44:31.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.211000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.216827 kernel: audit: type=1130 audit(1768434271.211:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.216865 kernel: audit: type=1131 audit(1768434271.211:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:31.219995 initrd-setup-root-after-ignition[1163]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:44:31.219995 initrd-setup-root-after-ignition[1163]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:44:31.222718 initrd-setup-root-after-ignition[1167]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 14 23:44:31.223053 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:44:31.224000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.225600 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 14 23:44:31.230503 kernel: audit: type=1130 audit(1768434271.224:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.230418 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 14 23:44:31.261864 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 14 23:44:31.262000 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 14 23:44:31.268848 kernel: audit: type=1130 audit(1768434271.263:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.268872 kernel: audit: type=1131 audit(1768434271.263:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:31.263000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.263000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.263939 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 14 23:44:31.269638 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 14 23:44:31.271366 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 14 23:44:31.272227 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 14 23:44:31.309723 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:44:31.310000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.311986 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 14 23:44:31.315567 kernel: audit: type=1130 audit(1768434271.310:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.333255 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 14 23:44:31.333463 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:44:31.335569 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 14 23:44:31.337313 systemd[1]: Stopped target timers.target - Timer Units. 
Jan 14 23:44:31.338905 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 14 23:44:31.340000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.339022 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 14 23:44:31.344316 kernel: audit: type=1131 audit(1768434271.340:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.343569 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 14 23:44:31.345420 systemd[1]: Stopped target basic.target - Basic System. Jan 14 23:44:31.346827 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 14 23:44:31.348218 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 14 23:44:31.349926 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 14 23:44:31.351578 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 14 23:44:31.353206 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 14 23:44:31.354847 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 14 23:44:31.356447 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 14 23:44:31.358092 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 14 23:44:31.359591 systemd[1]: Stopped target swap.target - Swaps. Jan 14 23:44:31.360862 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 14 23:44:31.361000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:31.360986 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 14 23:44:31.362940 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 14 23:44:31.364596 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 14 23:44:31.366238 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 14 23:44:31.370438 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 14 23:44:31.372000 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 14 23:44:31.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.372128 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 14 23:44:31.374603 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 14 23:44:31.375000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.374728 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 14 23:44:31.377000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.376329 systemd[1]: ignition-files.service: Deactivated successfully. Jan 14 23:44:31.376434 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 14 23:44:31.381000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:31.378828 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 14 23:44:31.379587 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 14 23:44:31.379712 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 14 23:44:31.384000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.382061 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 14 23:44:31.386000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.383638 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 14 23:44:31.388000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.383753 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:44:31.385305 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 14 23:44:31.385411 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 14 23:44:31.387147 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 14 23:44:31.387248 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 14 23:44:31.392038 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 14 23:44:31.394426 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. 
Jan 14 23:44:31.395000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.395000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.407509 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 14 23:44:31.411868 ignition[1187]: INFO : Ignition 2.22.0 Jan 14 23:44:31.411868 ignition[1187]: INFO : Stage: umount Jan 14 23:44:31.411868 ignition[1187]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 14 23:44:31.411868 ignition[1187]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 14 23:44:31.411868 ignition[1187]: INFO : umount: umount passed Jan 14 23:44:31.411868 ignition[1187]: INFO : Ignition finished successfully Jan 14 23:44:31.416000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.413511 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 14 23:44:31.417000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.415318 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 14 23:44:31.419000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.416738 systemd[1]: sysroot-boot.service: Deactivated successfully. 
Jan 14 23:44:31.420000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.416820 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 14 23:44:31.421000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.418394 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 14 23:44:31.418475 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 14 23:44:31.424000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.419478 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 14 23:44:31.419521 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 14 23:44:31.420944 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 14 23:44:31.420995 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 14 23:44:31.422385 systemd[1]: Stopped target network.target - Network. Jan 14 23:44:31.423700 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 14 23:44:31.423747 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 14 23:44:31.425185 systemd[1]: Stopped target paths.target - Path Units. Jan 14 23:44:31.426536 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 14 23:44:31.430333 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 14 23:44:31.432038 systemd[1]: Stopped target slices.target - Slice Units. Jan 14 23:44:31.433554 systemd[1]: Stopped target sockets.target - Socket Units. 
Jan 14 23:44:31.434914 systemd[1]: iscsid.socket: Deactivated successfully. Jan 14 23:44:31.440000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.434951 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 14 23:44:31.441000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.436370 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 14 23:44:31.443000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:31.436399 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 14 23:44:31.438095 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 14 23:44:31.438118 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 14 23:44:31.439548 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 14 23:44:31.439597 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 14 23:44:31.440943 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 14 23:44:31.440983 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 14 23:44:31.442276 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 14 23:44:31.442320 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 14 23:44:31.444089 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 14 23:44:31.445392 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... 
Jan 14 23:44:31.458000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.457044 systemd[1]: systemd-resolved.service: Deactivated successfully.
Jan 14 23:44:31.457141 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution.
Jan 14 23:44:31.460623 systemd[1]: systemd-networkd.service: Deactivated successfully.
Jan 14 23:44:31.461000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.460722 systemd[1]: Stopped systemd-networkd.service - Network Configuration.
Jan 14 23:44:31.463000 audit: BPF prog-id=6 op=UNLOAD
Jan 14 23:44:31.464907 systemd[1]: Stopped target network-pre.target - Preparation for Network.
Jan 14 23:44:31.465000 audit: BPF prog-id=9 op=UNLOAD
Jan 14 23:44:31.465930 systemd[1]: systemd-networkd.socket: Deactivated successfully.
Jan 14 23:44:31.465969 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 23:44:31.468341 systemd[1]: Stopping network-cleanup.service - Network Cleanup...
Jan 14 23:44:31.469794 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully.
Jan 14 23:44:31.469849 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline.
Jan 14 23:44:31.471000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.472000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.471495 systemd[1]: systemd-sysctl.service: Deactivated successfully.
Jan 14 23:44:31.474000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.471537 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables.
Jan 14 23:44:31.473005 systemd[1]: systemd-modules-load.service: Deactivated successfully.
Jan 14 23:44:31.473044 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules.
Jan 14 23:44:31.474868 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files...
Jan 14 23:44:31.486100 systemd[1]: systemd-udevd.service: Deactivated successfully.
Jan 14 23:44:31.486232 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files.
Jan 14 23:44:31.487000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.488146 systemd[1]: systemd-udevd-control.socket: Deactivated successfully.
Jan 14 23:44:31.488193 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket.
Jan 14 23:44:31.490282 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully.
Jan 14 23:44:31.493000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.490315 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 23:44:31.491957 systemd[1]: dracut-pre-udev.service: Deactivated successfully.
Jan 14 23:44:31.495000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.492003 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook.
Jan 14 23:44:31.494220 systemd[1]: dracut-cmdline.service: Deactivated successfully.
Jan 14 23:44:31.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.494287 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook.
Jan 14 23:44:31.496816 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully.
Jan 14 23:44:31.496866 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters.
Jan 14 23:44:31.502000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.500079 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database...
Jan 14 23:44:31.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.500999 systemd[1]: systemd-network-generator.service: Deactivated successfully.
Jan 14 23:44:31.505000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.501052 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 23:44:31.502759 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully.
Jan 14 23:44:31.502800 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 23:44:31.504625 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully.
Jan 14 23:44:31.504669 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup.
Jan 14 23:44:31.518214 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully.
Jan 14 23:44:31.518362 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database.
Jan 14 23:44:31.520000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.520000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.520482 systemd[1]: network-cleanup.service: Deactivated successfully.
Jan 14 23:44:31.521000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:31.520589 systemd[1]: Stopped network-cleanup.service - Network Cleanup.
Jan 14 23:44:31.522553 systemd[1]: Reached target initrd-switch-root.target - Switch Root.
Jan 14 23:44:31.524281 systemd[1]: Starting initrd-switch-root.service - Switch Root...
Jan 14 23:44:31.550378 systemd[1]: Switching root.
Jan 14 23:44:31.588055 systemd-journald[416]: Journal stopped
Jan 14 23:44:32.519684 systemd-journald[416]: Received SIGTERM from PID 1 (systemd).
Jan 14 23:44:32.519763 kernel: SELinux: policy capability network_peer_controls=1
Jan 14 23:44:32.519783 kernel: SELinux: policy capability open_perms=1
Jan 14 23:44:32.519793 kernel: SELinux: policy capability extended_socket_class=1
Jan 14 23:44:32.519803 kernel: SELinux: policy capability always_check_network=0
Jan 14 23:44:32.519814 kernel: SELinux: policy capability cgroup_seclabel=1
Jan 14 23:44:32.519824 kernel: SELinux: policy capability nnp_nosuid_transition=1
Jan 14 23:44:32.519836 kernel: SELinux: policy capability genfs_seclabel_symlinks=0
Jan 14 23:44:32.519852 kernel: SELinux: policy capability ioctl_skip_cloexec=0
Jan 14 23:44:32.519863 kernel: SELinux: policy capability userspace_initial_context=0
Jan 14 23:44:32.519873 systemd[1]: Successfully loaded SELinux policy in 71.576ms.
Jan 14 23:44:32.519894 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 5.693ms.
Jan 14 23:44:32.519909 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE)
Jan 14 23:44:32.519921 systemd[1]: Detected virtualization kvm.
Jan 14 23:44:32.519934 systemd[1]: Detected architecture arm64.
Jan 14 23:44:32.519944 systemd[1]: Detected first boot.
Jan 14 23:44:32.519958 systemd[1]: Hostname set to .
Jan 14 23:44:32.519968 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID.
Jan 14 23:44:32.519979 zram_generator::config[1233]: No configuration found.
Jan 14 23:44:32.519996 kernel: NET: Registered PF_VSOCK protocol family
Jan 14 23:44:32.520006 systemd[1]: Populated /etc with preset unit settings.
Jan 14 23:44:32.520019 systemd[1]: initrd-switch-root.service: Deactivated successfully.
Jan 14 23:44:32.520030 systemd[1]: Stopped initrd-switch-root.service - Switch Root.
Jan 14 23:44:32.520040 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1.
Jan 14 23:44:32.520052 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config.
Jan 14 23:44:32.520065 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run.
Jan 14 23:44:32.520076 systemd[1]: Created slice system-getty.slice - Slice /system/getty.
Jan 14 23:44:32.520087 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe.
Jan 14 23:44:32.520099 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty.
Jan 14 23:44:32.520110 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit.
Jan 14 23:44:32.520121 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck.
Jan 14 23:44:32.520131 systemd[1]: Created slice user.slice - User and Session Slice.
Jan 14 23:44:32.520145 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch.
Jan 14 23:44:32.520156 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch.
Jan 14 23:44:32.520167 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch.
Jan 14 23:44:32.520181 systemd[1]: Set up automount boot.automount - Boot partition Automount Point.
Jan 14 23:44:32.520191 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point.
Jan 14 23:44:32.520202 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM...
Jan 14 23:44:32.520213 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0...
Jan 14 23:44:32.520224 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre).
Jan 14 23:44:32.520237 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes.
Jan 14 23:44:32.520248 systemd[1]: Stopped target initrd-switch-root.target - Switch Root.
Jan 14 23:44:32.520259 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems.
Jan 14 23:44:32.520282 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System.
Jan 14 23:44:32.520294 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes.
Jan 14 23:44:32.520305 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes.
Jan 14 23:44:32.520318 systemd[1]: Reached target remote-fs.target - Remote File Systems.
Jan 14 23:44:32.520329 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes.
Jan 14 23:44:32.520340 systemd[1]: Reached target slices.target - Slice Units.
Jan 14 23:44:32.520355 systemd[1]: Reached target swap.target - Swaps.
Jan 14 23:44:32.520366 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes.
Jan 14 23:44:32.520377 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket.
Jan 14 23:44:32.520390 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption.
Jan 14 23:44:32.520401 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket.
Jan 14 23:44:32.520414 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket.
Jan 14 23:44:32.520425 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket.
Jan 14 23:44:32.520435 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket.
Jan 14 23:44:32.520446 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket.
Jan 14 23:44:32.520457 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket.
Jan 14 23:44:32.520467 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket.
Jan 14 23:44:32.521024 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket.
Jan 14 23:44:32.521042 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System...
Jan 14 23:44:32.521053 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System...
Jan 14 23:44:32.521064 systemd[1]: Mounting media.mount - External Media Directory...
Jan 14 23:44:32.521075 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System...
Jan 14 23:44:32.521086 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System...
Jan 14 23:44:32.521097 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp...
Jan 14 23:44:32.521108 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw).
Jan 14 23:44:32.521121 systemd[1]: Reached target machines.target - Containers.
Jan 14 23:44:32.521132 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files...
Jan 14 23:44:32.521144 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 23:44:32.521155 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes...
Jan 14 23:44:32.521167 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs...
Jan 14 23:44:32.521178 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod...
Jan 14 23:44:32.521191 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm...
Jan 14 23:44:32.521203 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore...
Jan 14 23:44:32.521214 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse...
Jan 14 23:44:32.521225 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop...
Jan 14 23:44:32.521238 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf).
Jan 14 23:44:32.521249 systemd[1]: systemd-fsck-root.service: Deactivated successfully.
Jan 14 23:44:32.521260 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device.
Jan 14 23:44:32.521305 systemd[1]: systemd-fsck-usr.service: Deactivated successfully.
Jan 14 23:44:32.521317 systemd[1]: Stopped systemd-fsck-usr.service.
Jan 14 23:44:32.521332 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67).
Jan 14 23:44:32.521344 systemd[1]: Starting systemd-journald.service - Journal Service...
Jan 14 23:44:32.521357 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules...
Jan 14 23:44:32.521371 kernel: fuse: init (API version 7.41)
Jan 14 23:44:32.521383 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line...
Jan 14 23:44:32.521394 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems...
Jan 14 23:44:32.521404 kernel: ACPI: bus type drm_connector registered
Jan 14 23:44:32.521414 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials...
Jan 14 23:44:32.521425 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices...
Jan 14 23:44:32.521437 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System.
Jan 14 23:44:32.521471 systemd-journald[1302]: Collecting audit messages is enabled.
Jan 14 23:44:32.521499 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System.
Jan 14 23:44:32.521510 systemd[1]: Mounted media.mount - External Media Directory.
Jan 14 23:44:32.521522 systemd-journald[1302]: Journal started
Jan 14 23:44:32.521545 systemd-journald[1302]: Runtime Journal (/run/log/journal/c7cbdb0d0e7e47fe85347ae482ef18a3) is 8M, max 319.5M, 311.5M free.
Jan 14 23:44:32.385000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1
Jan 14 23:44:32.476000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.478000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.481000 audit: BPF prog-id=14 op=UNLOAD
Jan 14 23:44:32.481000 audit: BPF prog-id=13 op=UNLOAD
Jan 14 23:44:32.481000 audit: BPF prog-id=15 op=LOAD
Jan 14 23:44:32.481000 audit: BPF prog-id=16 op=LOAD
Jan 14 23:44:32.482000 audit: BPF prog-id=17 op=LOAD
Jan 14 23:44:32.516000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1
Jan 14 23:44:32.516000 audit[1302]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffd3a88aa0 a2=4000 a3=0 items=0 ppid=1 pid=1302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:44:32.516000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald"
Jan 14 23:44:32.299767 systemd[1]: Queued start job for default target multi-user.target.
Jan 14 23:44:32.319627 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6.
Jan 14 23:44:32.320044 systemd[1]: systemd-journald.service: Deactivated successfully.
Jan 14 23:44:32.524302 systemd[1]: Started systemd-journald.service - Journal Service.
Jan 14 23:44:32.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.525244 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System.
Jan 14 23:44:32.526343 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System.
Jan 14 23:44:32.527414 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp.
Jan 14 23:44:32.529369 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files.
Jan 14 23:44:32.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.530635 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes.
Jan 14 23:44:32.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.531991 systemd[1]: modprobe@configfs.service: Deactivated successfully.
Jan 14 23:44:32.532151 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs.
Jan 14 23:44:32.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.533590 systemd[1]: modprobe@dm_mod.service: Deactivated successfully.
Jan 14 23:44:32.533766 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod.
Jan 14 23:44:32.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.534959 systemd[1]: modprobe@drm.service: Deactivated successfully.
Jan 14 23:44:32.535113 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm.
Jan 14 23:44:32.535000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.536411 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully.
Jan 14 23:44:32.536560 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore.
Jan 14 23:44:32.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.537000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.537811 systemd[1]: modprobe@fuse.service: Deactivated successfully.
Jan 14 23:44:32.537970 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse.
Jan 14 23:44:32.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.538000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.539156 systemd[1]: modprobe@loop.service: Deactivated successfully.
Jan 14 23:44:32.539341 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop.
Jan 14 23:44:32.539000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.539000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.540589 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules.
Jan 14 23:44:32.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.541917 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line.
Jan 14 23:44:32.542000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.543878 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems.
Jan 14 23:44:32.544000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.545556 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials.
Jan 14 23:44:32.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.557853 systemd[1]: Reached target network-pre.target - Preparation for Network.
Jan 14 23:44:32.559720 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket.
Jan 14 23:44:32.561749 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System...
Jan 14 23:44:32.563586 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System...
Jan 14 23:44:32.564567 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/).
Jan 14 23:44:32.564595 systemd[1]: Reached target local-fs.target - Local File Systems.
Jan 14 23:44:32.566255 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management.
Jan 14 23:44:32.567405 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met.
Jan 14 23:44:32.567510 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met.
Jan 14 23:44:32.572430 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database...
Jan 14 23:44:32.574206 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage...
Jan 14 23:44:32.576367 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore).
Jan 14 23:44:32.577235 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed...
Jan 14 23:44:32.578311 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met.
Jan 14 23:44:32.580183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables...
Jan 14 23:44:32.582462 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/...
Jan 14 23:44:32.586306 systemd[1]: Starting systemd-sysusers.service - Create System Users...
Jan 14 23:44:32.588461 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System.
Jan 14 23:44:32.589253 systemd-journald[1302]: Time spent on flushing to /var/log/journal/c7cbdb0d0e7e47fe85347ae482ef18a3 is 37.110ms for 1816 entries.
Jan 14 23:44:32.589253 systemd-journald[1302]: System Journal (/var/log/journal/c7cbdb0d0e7e47fe85347ae482ef18a3) is 8M, max 588.1M, 580.1M free.
Jan 14 23:44:32.643189 systemd-journald[1302]: Received client request to flush runtime journal.
Jan 14 23:44:32.643246 kernel: loop1: detected capacity change from 0 to 1648
Jan 14 23:44:32.594000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.617000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.624000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.590339 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System.
Jan 14 23:44:32.591874 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed.
Jan 14 23:44:32.595643 systemd[1]: Reached target first-boot-complete.target - First Boot Complete.
Jan 14 23:44:32.597856 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk...
Jan 14 23:44:32.616247 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables.
Jan 14 23:44:32.620923 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices.
Jan 14 23:44:32.645304 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage.
Jan 14 23:44:32.646000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.651293 kernel: loop2: detected capacity change from 0 to 109872
Jan 14 23:44:32.654293 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk.
Jan 14 23:44:32.655000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.655916 systemd[1]: Finished systemd-sysusers.service - Create System Users.
Jan 14 23:44:32.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.658000 audit: BPF prog-id=18 op=LOAD
Jan 14 23:44:32.658000 audit: BPF prog-id=19 op=LOAD
Jan 14 23:44:32.658000 audit: BPF prog-id=20 op=LOAD
Jan 14 23:44:32.659286 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer...
Jan 14 23:44:32.660000 audit: BPF prog-id=21 op=LOAD
Jan 14 23:44:32.661866 systemd[1]: Starting systemd-resolved.service - Network Name Resolution...
Jan 14 23:44:32.665412 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev...
Jan 14 23:44:32.675000 audit: BPF prog-id=22 op=LOAD
Jan 14 23:44:32.675000 audit: BPF prog-id=23 op=LOAD
Jan 14 23:44:32.676000 audit: BPF prog-id=24 op=LOAD
Jan 14 23:44:32.676904 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager...
Jan 14 23:44:32.678000 audit: BPF prog-id=25 op=LOAD
Jan 14 23:44:32.678000 audit: BPF prog-id=26 op=LOAD
Jan 14 23:44:32.678000 audit: BPF prog-id=27 op=LOAD
Jan 14 23:44:32.679476 systemd[1]: Starting systemd-userdbd.service - User Database Manager...
Jan 14 23:44:32.693312 kernel: loop3: detected capacity change from 0 to 100192
Jan 14 23:44:32.697476 systemd-tmpfiles[1373]: ACLs are not supported, ignoring.
Jan 14 23:44:32.697489 systemd-tmpfiles[1373]: ACLs are not supported, ignoring.
Jan 14 23:44:32.701345 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev.
Jan 14 23:44:32.704000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:32.713906 systemd-nsresourced[1376]: Not setting up BPF subsystem, as functionality has been disabled at compile time.
Jan 14 23:44:32.714929 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager.
Jan 14 23:44:32.715000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:32.740962 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 14 23:44:32.741000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:32.745300 kernel: loop4: detected capacity change from 0 to 207008 Jan 14 23:44:32.793497 systemd-oomd[1371]: No swap; memory pressure usage will be degraded Jan 14 23:44:32.795302 kernel: loop5: detected capacity change from 0 to 1648 Jan 14 23:44:32.795593 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 14 23:44:32.796000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:32.799182 systemd-resolved[1372]: Positive Trust Anchors: Jan 14 23:44:32.799204 systemd-resolved[1372]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d
Jan 14 23:44:32.799207 systemd-resolved[1372]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 14 23:44:32.799238 systemd-resolved[1372]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 14 23:44:32.805310 kernel: loop6: detected capacity change from 0 to 109872 Jan 14 23:44:32.809971 systemd-resolved[1372]: Using system hostname 'ci-4515-1-0-n-1d3be4f164'. Jan 14 23:44:32.811665 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 14 23:44:32.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:32.814435 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 14 23:44:32.817300 kernel: loop7: detected capacity change from 0 to 100192 Jan 14 23:44:32.830314 kernel: loop1: detected capacity change from 0 to 207008 Jan 14 23:44:32.845995 (sd-merge)[1396]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 14 23:44:32.848908 (sd-merge)[1396]: Merged extensions into '/usr'. Jan 14 23:44:32.852633 systemd[1]: Reload requested from client PID 1353 ('systemd-sysext') (unit systemd-sysext.service)... Jan 14 23:44:32.852651 systemd[1]: Reloading... Jan 14 23:44:32.905357 zram_generator::config[1423]: No configuration found. Jan 14 23:44:33.059496 systemd[1]: etc-machine\x2did.mount: Deactivated successfully.
Jan 14 23:44:33.059941 systemd[1]: Reloading finished in 206 ms. Jan 14 23:44:33.091432 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 14 23:44:33.092000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.092712 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 14 23:44:33.093000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.117752 systemd[1]: Starting ensure-sysext.service... Jan 14 23:44:33.119379 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 14 23:44:33.120000 audit: BPF prog-id=8 op=UNLOAD Jan 14 23:44:33.120000 audit: BPF prog-id=7 op=UNLOAD Jan 14 23:44:33.120000 audit: BPF prog-id=28 op=LOAD Jan 14 23:44:33.120000 audit: BPF prog-id=29 op=LOAD Jan 14 23:44:33.121401 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... 
Jan 14 23:44:33.123000 audit: BPF prog-id=30 op=LOAD Jan 14 23:44:33.123000 audit: BPF prog-id=25 op=UNLOAD Jan 14 23:44:33.123000 audit: BPF prog-id=31 op=LOAD Jan 14 23:44:33.123000 audit: BPF prog-id=32 op=LOAD Jan 14 23:44:33.123000 audit: BPF prog-id=26 op=UNLOAD Jan 14 23:44:33.123000 audit: BPF prog-id=27 op=UNLOAD Jan 14 23:44:33.123000 audit: BPF prog-id=33 op=LOAD Jan 14 23:44:33.123000 audit: BPF prog-id=21 op=UNLOAD Jan 14 23:44:33.124000 audit: BPF prog-id=34 op=LOAD Jan 14 23:44:33.124000 audit: BPF prog-id=22 op=UNLOAD Jan 14 23:44:33.124000 audit: BPF prog-id=35 op=LOAD Jan 14 23:44:33.124000 audit: BPF prog-id=36 op=LOAD Jan 14 23:44:33.124000 audit: BPF prog-id=23 op=UNLOAD Jan 14 23:44:33.124000 audit: BPF prog-id=24 op=UNLOAD Jan 14 23:44:33.124000 audit: BPF prog-id=37 op=LOAD Jan 14 23:44:33.124000 audit: BPF prog-id=15 op=UNLOAD Jan 14 23:44:33.125000 audit: BPF prog-id=38 op=LOAD Jan 14 23:44:33.125000 audit: BPF prog-id=39 op=LOAD Jan 14 23:44:33.125000 audit: BPF prog-id=16 op=UNLOAD Jan 14 23:44:33.125000 audit: BPF prog-id=17 op=UNLOAD Jan 14 23:44:33.125000 audit: BPF prog-id=40 op=LOAD Jan 14 23:44:33.125000 audit: BPF prog-id=18 op=UNLOAD Jan 14 23:44:33.125000 audit: BPF prog-id=41 op=LOAD Jan 14 23:44:33.125000 audit: BPF prog-id=42 op=LOAD Jan 14 23:44:33.125000 audit: BPF prog-id=19 op=UNLOAD Jan 14 23:44:33.125000 audit: BPF prog-id=20 op=UNLOAD Jan 14 23:44:33.132868 systemd[1]: Reload requested from client PID 1463 ('systemctl') (unit ensure-sysext.service)... Jan 14 23:44:33.132884 systemd[1]: Reloading... Jan 14 23:44:33.135959 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 14 23:44:33.136245 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 14 23:44:33.136535 systemd-tmpfiles[1464]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 14 23:44:33.137516 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jan 14 23:44:33.137579 systemd-tmpfiles[1464]: ACLs are not supported, ignoring. Jan 14 23:44:33.146069 systemd-tmpfiles[1464]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:44:33.146085 systemd-tmpfiles[1464]: Skipping /boot Jan 14 23:44:33.152665 systemd-tmpfiles[1464]: Detected autofs mount point /boot during canonicalization of boot. Jan 14 23:44:33.152682 systemd-tmpfiles[1464]: Skipping /boot Jan 14 23:44:33.154407 systemd-udevd[1465]: Using default interface naming scheme 'v257'. Jan 14 23:44:33.185376 zram_generator::config[1496]: No configuration found. Jan 14 23:44:33.279288 kernel: mousedev: PS/2 mouse device common for all mice Jan 14 23:44:33.365278 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Jan 14 23:44:33.365362 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 14 23:44:33.365377 kernel: [drm] features: -context_init Jan 14 23:44:33.368729 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 14 23:44:33.369910 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 14 23:44:33.370367 systemd[1]: Reloading finished in 237 ms. Jan 14 23:44:33.387288 kernel: [drm] number of scanouts: 1 Jan 14 23:44:33.387331 kernel: [drm] number of cap sets: 0 Jan 14 23:44:33.395754 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 14 23:44:33.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:33.399000 audit: BPF prog-id=43 op=LOAD Jan 14 23:44:33.399000 audit: BPF prog-id=44 op=LOAD Jan 14 23:44:33.399000 audit: BPF prog-id=45 op=LOAD Jan 14 23:44:33.399000 audit: BPF prog-id=46 op=LOAD Jan 14 23:44:33.399000 audit: BPF prog-id=47 op=LOAD Jan 14 23:44:33.399000 audit: BPF prog-id=48 op=LOAD Jan 14 23:44:33.400000 audit: BPF prog-id=30 op=UNLOAD Jan 14 23:44:33.400000 audit: BPF prog-id=31 op=UNLOAD Jan 14 23:44:33.400000 audit: BPF prog-id=32 op=UNLOAD Jan 14 23:44:33.400000 audit: BPF prog-id=37 op=UNLOAD Jan 14 23:44:33.400000 audit: BPF prog-id=38 op=UNLOAD Jan 14 23:44:33.400000 audit: BPF prog-id=39 op=UNLOAD Jan 14 23:44:33.405000 audit: BPF prog-id=49 op=LOAD Jan 14 23:44:33.406278 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Jan 14 23:44:33.406306 kernel: kauditd_printk_skb: 148 callbacks suppressed Jan 14 23:44:33.406338 kernel: audit: type=1334 audit(1768434273.405:194): prog-id=49 op=LOAD Jan 14 23:44:33.413663 kernel: Console: switching to colour frame buffer device 160x50 Jan 14 23:44:33.417290 kernel: audit: type=1334 audit(1768434273.414:195): prog-id=34 op=UNLOAD Jan 14 23:44:33.417361 kernel: audit: type=1334 audit(1768434273.414:196): prog-id=50 op=LOAD Jan 14 23:44:33.417376 kernel: audit: type=1334 audit(1768434273.414:197): prog-id=51 op=LOAD Jan 14 23:44:33.417387 kernel: audit: type=1334 audit(1768434273.414:198): prog-id=35 op=UNLOAD Jan 14 23:44:33.417403 kernel: audit: type=1334 audit(1768434273.414:199): prog-id=36 op=UNLOAD Jan 14 23:44:33.417428 kernel: audit: type=1334 audit(1768434273.414:200): prog-id=52 op=LOAD Jan 14 23:44:33.417443 kernel: audit: type=1334 audit(1768434273.414:201): prog-id=40 op=UNLOAD Jan 14 23:44:33.417457 kernel: audit: type=1334 audit(1768434273.415:202): prog-id=53 op=LOAD Jan 14 23:44:33.417472 kernel: audit: type=1334 audit(1768434273.415:203): prog-id=54 op=LOAD Jan 14 23:44:33.414000 audit: BPF prog-id=34 op=UNLOAD
Jan 14 23:44:33.414000 audit: BPF prog-id=50 op=LOAD Jan 14 23:44:33.414000 audit: BPF prog-id=51 op=LOAD Jan 14 23:44:33.414000 audit: BPF prog-id=35 op=UNLOAD Jan 14 23:44:33.414000 audit: BPF prog-id=36 op=UNLOAD Jan 14 23:44:33.414000 audit: BPF prog-id=52 op=LOAD Jan 14 23:44:33.414000 audit: BPF prog-id=40 op=UNLOAD Jan 14 23:44:33.415000 audit: BPF prog-id=53 op=LOAD Jan 14 23:44:33.415000 audit: BPF prog-id=54 op=LOAD Jan 14 23:44:33.415000 audit: BPF prog-id=41 op=UNLOAD Jan 14 23:44:33.415000 audit: BPF prog-id=42 op=UNLOAD Jan 14 23:44:33.415000 audit: BPF prog-id=55 op=LOAD Jan 14 23:44:33.415000 audit: BPF prog-id=33 op=UNLOAD Jan 14 23:44:33.416000 audit: BPF prog-id=56 op=LOAD Jan 14 23:44:33.416000 audit: BPF prog-id=57 op=LOAD Jan 14 23:44:33.416000 audit: BPF prog-id=28 op=UNLOAD Jan 14 23:44:33.416000 audit: BPF prog-id=29 op=UNLOAD Jan 14 23:44:33.430541 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 14 23:44:33.437322 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 14 23:44:33.438000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.461335 systemd[1]: Finished ensure-sysext.service. Jan 14 23:44:33.461000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.477937 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 14 23:44:33.479799 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 14 23:44:33.480990 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met.
Jan 14 23:44:33.482196 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 14 23:44:33.491595 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 14 23:44:33.493378 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 14 23:44:33.497427 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 14 23:44:33.499575 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 14 23:44:33.500829 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 14 23:44:33.500950 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 14 23:44:33.501933 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 14 23:44:33.504244 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 14 23:44:33.507430 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 14 23:44:33.509399 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 14 23:44:33.511000 audit: BPF prog-id=58 op=LOAD Jan 14 23:44:33.512031 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 14 23:44:33.513013 systemd[1]: Reached target time-set.target - System Time Set. Jan 14 23:44:33.515418 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 14 23:44:33.516319 kernel: pps_core: LinuxPPS API ver. 1 registered
Jan 14 23:44:33.516347 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 14 23:44:33.518585 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 14 23:44:33.520572 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 14 23:44:33.522319 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 14 23:44:33.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.524000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.524817 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 14 23:44:33.525001 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 14 23:44:33.525368 kernel: PTP clock support registered Jan 14 23:44:33.525000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.525000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.526457 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 14 23:44:33.526680 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 14 23:44:33.529000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:33.529000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.529642 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 14 23:44:33.533501 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 14 23:44:33.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.534000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.534927 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 14 23:44:33.535137 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 14 23:44:33.536000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.536000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.537365 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 14 23:44:33.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:33.540000 audit[1604]: SYSTEM_BOOT pid=1604 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.546871 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 14 23:44:33.547029 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 14 23:44:33.553320 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 14 23:44:33.554000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:33.560692 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 14 23:44:33.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 14 23:44:33.570000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 14 23:44:33.570000 audit[1631]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=fffff7af9db0 a2=420 a3=0 items=0 ppid=1584 pid=1631 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:33.570000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:44:33.571072 augenrules[1631]: No rules Jan 14 23:44:33.572233 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:44:33.572810 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:44:33.609942 systemd-networkd[1602]: lo: Link UP Jan 14 23:44:33.609951 systemd-networkd[1602]: lo: Gained carrier Jan 14 23:44:33.611520 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 14 23:44:33.612677 systemd[1]: Reached target network.target - Network. Jan 14 23:44:33.613466 systemd-networkd[1602]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:44:33.613479 systemd-networkd[1602]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 14 23:44:33.614584 systemd-networkd[1602]: eth0: Link UP Jan 14 23:44:33.614729 systemd-networkd[1602]: eth0: Gained carrier Jan 14 23:44:33.614742 systemd-networkd[1602]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 14 23:44:33.616431 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 14 23:44:33.618596 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured...
Jan 14 23:44:33.621324 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 14 23:44:33.622884 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 14 23:44:33.626340 systemd-networkd[1602]: eth0: DHCPv4 address 10.0.22.230/25, gateway 10.0.22.129 acquired from 10.0.22.129 Jan 14 23:44:33.626812 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 14 23:44:33.643825 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 14 23:44:34.191902 ldconfig[1595]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 14 23:44:34.197638 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 14 23:44:34.202067 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 14 23:44:34.227032 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 14 23:44:34.228263 systemd[1]: Reached target sysinit.target - System Initialization. Jan 14 23:44:34.229299 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 14 23:44:34.230311 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 14 23:44:34.231470 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 14 23:44:34.232411 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 14 23:44:34.233445 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 14 23:44:34.234499 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. 
Jan 14 23:44:34.235441 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 14 23:44:34.236427 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 14 23:44:34.236458 systemd[1]: Reached target paths.target - Path Units. Jan 14 23:44:34.237169 systemd[1]: Reached target timers.target - Timer Units. Jan 14 23:44:34.239560 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 14 23:44:34.241660 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 14 23:44:34.244191 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 14 23:44:34.245562 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 14 23:44:34.247133 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 14 23:44:34.251257 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 14 23:44:34.252395 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 14 23:44:34.253858 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 14 23:44:34.254913 systemd[1]: Reached target sockets.target - Socket Units. Jan 14 23:44:34.255762 systemd[1]: Reached target basic.target - Basic System. Jan 14 23:44:34.256576 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:44:34.256609 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 14 23:44:34.262680 systemd[1]: Starting chronyd.service - NTP client/server... Jan 14 23:44:34.264254 systemd[1]: Starting containerd.service - containerd container runtime... Jan 14 23:44:34.266248 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... 
Jan 14 23:44:34.269466 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 14 23:44:34.271156 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 14 23:44:34.274295 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:34.276426 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 14 23:44:34.278186 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 14 23:44:34.279195 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 14 23:44:34.281605 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 14 23:44:34.291585 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 14 23:44:34.295359 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 14 23:44:34.296050 jq[1659]: false Jan 14 23:44:34.297234 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 14 23:44:34.300211 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 14 23:44:34.301143 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 14 23:44:34.301536 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 14 23:44:34.302241 systemd[1]: Starting update-engine.service - Update Engine... Jan 14 23:44:34.306369 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 14 23:44:34.308817 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 14 23:44:34.310432 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 14 23:44:34.310665 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 14 23:44:34.310934 systemd[1]: motdgen.service: Deactivated successfully. Jan 14 23:44:34.311112 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 14 23:44:34.312006 jq[1676]: true Jan 14 23:44:34.318226 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 14 23:44:34.319580 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 14 23:44:34.322602 extend-filesystems[1660]: Found /dev/vda6 Jan 14 23:44:34.329159 jq[1681]: true Jan 14 23:44:34.332582 chronyd[1652]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 14 23:44:34.333985 chronyd[1652]: Loaded seccomp filter (level 2) Jan 14 23:44:34.334153 systemd[1]: Started chronyd.service - NTP client/server. Jan 14 23:44:34.339787 extend-filesystems[1660]: Found /dev/vda9 Jan 14 23:44:34.341113 extend-filesystems[1660]: Checking size of /dev/vda9 Jan 14 23:44:34.347284 tar[1680]: linux-arm64/LICENSE Jan 14 23:44:34.347284 tar[1680]: linux-arm64/helm Jan 14 23:44:34.359461 extend-filesystems[1660]: Resized partition /dev/vda9 Jan 14 23:44:34.363307 extend-filesystems[1711]: resize2fs 1.47.3 (8-Jul-2025) Jan 14 23:44:34.371274 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 14 23:44:34.380166 dbus-daemon[1655]: [system] SELinux support is enabled Jan 14 23:44:34.380499 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 14 23:44:34.383762 update_engine[1674]: I20260114 23:44:34.383383 1674 main.cc:92] Flatcar Update Engine starting Jan 14 23:44:34.385247 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). 
Jan 14 23:44:34.385295 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 14 23:44:34.387358 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 14 23:44:34.387387 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 14 23:44:34.391865 systemd-logind[1670]: New seat seat0. Jan 14 23:44:34.394584 systemd[1]: Started update-engine.service - Update Engine. Jan 14 23:44:34.397306 update_engine[1674]: I20260114 23:44:34.394981 1674 update_check_scheduler.cc:74] Next update check in 7m6s Jan 14 23:44:34.398015 systemd-logind[1670]: Watching system buttons on /dev/input/event0 (Power Button) Jan 14 23:44:34.398043 systemd-logind[1670]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 14 23:44:34.400567 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 14 23:44:34.403726 systemd[1]: Started systemd-logind.service - User Login Management. Jan 14 23:44:34.466704 locksmithd[1724]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 14 23:44:34.495161 bash[1722]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:44:34.498533 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 14 23:44:34.504018 systemd[1]: Starting sshkeys.service... 
Jan 14 23:44:34.511161 sshd_keygen[1698]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 14 23:44:34.511458 containerd[1695]: time="2026-01-14T23:44:34Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 14 23:44:34.512569 containerd[1695]: time="2026-01-14T23:44:34.512531160Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 14 23:44:34.520744 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 14 23:44:34.524538 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 14 23:44:34.530599 containerd[1695]: time="2026-01-14T23:44:34.530546440Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.44µs" Jan 14 23:44:34.530599 containerd[1695]: time="2026-01-14T23:44:34.530587920Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 14 23:44:34.530687 containerd[1695]: time="2026-01-14T23:44:34.530630040Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 14 23:44:34.530687 containerd[1695]: time="2026-01-14T23:44:34.530644000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 14 23:44:34.530802 containerd[1695]: time="2026-01-14T23:44:34.530780680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 14 23:44:34.530829 containerd[1695]: time="2026-01-14T23:44:34.530801480Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1
Jan 14 23:44:34.530874 containerd[1695]: time="2026-01-14T23:44:34.530855200Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 14 23:44:34.530874 containerd[1695]: time="2026-01-14T23:44:34.530869280Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531181 containerd[1695]: time="2026-01-14T23:44:34.531152280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531181 containerd[1695]: time="2026-01-14T23:44:34.531169800Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531181 containerd[1695]: time="2026-01-14T23:44:34.531181600Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531181 containerd[1695]: time="2026-01-14T23:44:34.531189760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531482 containerd[1695]: time="2026-01-14T23:44:34.531457000Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531540 containerd[1695]: time="2026-01-14T23:44:34.531520440Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 14 23:44:34.531747 containerd[1695]: time="2026-01-14T23:44:34.531678720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1
Jan 14 23:44:34.532035 containerd[1695]: time="2026-01-14T23:44:34.532004360Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.532106 containerd[1695]: time="2026-01-14T23:44:34.532040960Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 14 23:44:34.532139 containerd[1695]: time="2026-01-14T23:44:34.532103200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 14 23:44:34.532158 containerd[1695]: time="2026-01-14T23:44:34.532142080Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 14 23:44:34.532642 containerd[1695]: time="2026-01-14T23:44:34.532580400Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 14 23:44:34.532718 containerd[1695]: time="2026-01-14T23:44:34.532699480Z" level=info msg="metadata content store policy set" policy=shared Jan 14 23:44:34.547389 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 14 23:44:34.550315 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:34.550343 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 14 23:44:34.562387 systemd[1]: issuegen.service: Deactivated successfully. Jan 14 23:44:34.562710 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 14 23:44:34.566166 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions...
Jan 14 23:44:34.588287 containerd[1695]: time="2026-01-14T23:44:34.588018560Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 14 23:44:34.588287 containerd[1695]: time="2026-01-14T23:44:34.588228960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:44:34.588596 containerd[1695]: time="2026-01-14T23:44:34.588561680Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 14 23:44:34.588646 containerd[1695]: time="2026-01-14T23:44:34.588610680Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 14 23:44:34.588646 containerd[1695]: time="2026-01-14T23:44:34.588629880Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 14 23:44:34.588681 containerd[1695]: time="2026-01-14T23:44:34.588644440Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 14 23:44:34.588681 containerd[1695]: time="2026-01-14T23:44:34.588656360Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 14 23:44:34.588681 containerd[1695]: time="2026-01-14T23:44:34.588666120Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 14 23:44:34.588763 containerd[1695]: time="2026-01-14T23:44:34.588741440Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 14 23:44:34.588788 containerd[1695]: time="2026-01-14T23:44:34.588764160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 14 23:44:34.588788 containerd[1695]: time="2026-01-14T23:44:34.588776880Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 14 23:44:34.588820 containerd[1695]: time="2026-01-14T23:44:34.588789960Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 14 23:44:34.588820 containerd[1695]: time="2026-01-14T23:44:34.588799480Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 14 23:44:34.588856 containerd[1695]: time="2026-01-14T23:44:34.588821440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 14 23:44:34.589100 containerd[1695]: time="2026-01-14T23:44:34.589071840Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 14 23:44:34.589140 containerd[1695]: time="2026-01-14T23:44:34.589106440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 14 23:44:34.589191 containerd[1695]: time="2026-01-14T23:44:34.589171880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 14 23:44:34.589228 containerd[1695]: time="2026-01-14T23:44:34.589191480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 14 23:44:34.589228 containerd[1695]: time="2026-01-14T23:44:34.589203520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 14 23:44:34.589228 containerd[1695]: time="2026-01-14T23:44:34.589212840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 14 23:44:34.589293 containerd[1695]: time="2026-01-14T23:44:34.589225080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 14 23:44:34.589293 containerd[1695]: time="2026-01-14T23:44:34.589249800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 14 23:44:34.589335 containerd[1695]: time="2026-01-14T23:44:34.589263080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 14 23:44:34.589335 containerd[1695]: time="2026-01-14T23:44:34.589331080Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 14 23:44:34.589372 containerd[1695]: time="2026-01-14T23:44:34.589342280Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 14 23:44:34.589391 containerd[1695]: time="2026-01-14T23:44:34.589370600Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 14 23:44:34.590031 containerd[1695]: time="2026-01-14T23:44:34.589423520Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 14 23:44:34.590031 containerd[1695]: time="2026-01-14T23:44:34.589445160Z" level=info msg="Start snapshots syncer" Jan 14 23:44:34.590031 containerd[1695]: time="2026-01-14T23:44:34.589521840Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 14 23:44:34.590143 containerd[1695]: time="2026-01-14T23:44:34.590031680Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 14 23:44:34.590143 containerd[1695]: time="2026-01-14T23:44:34.590080160Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 14 23:44:34.590247 containerd[1695]: 
time="2026-01-14T23:44:34.590133600Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 14 23:44:34.590247 containerd[1695]: time="2026-01-14T23:44:34.590234880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 14 23:44:34.590313 containerd[1695]: time="2026-01-14T23:44:34.590255560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 14 23:44:34.590313 containerd[1695]: time="2026-01-14T23:44:34.590281480Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 14 23:44:34.590313 containerd[1695]: time="2026-01-14T23:44:34.590302640Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 14 23:44:34.590363 containerd[1695]: time="2026-01-14T23:44:34.590317160Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 14 23:44:34.590363 containerd[1695]: time="2026-01-14T23:44:34.590327680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 14 23:44:34.590363 containerd[1695]: time="2026-01-14T23:44:34.590337840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 14 23:44:34.590363 containerd[1695]: time="2026-01-14T23:44:34.590348520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 14 23:44:34.590363 containerd[1695]: time="2026-01-14T23:44:34.590358680Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 14 23:44:34.590444 containerd[1695]: time="2026-01-14T23:44:34.590392520Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:44:34.590444 containerd[1695]: 
time="2026-01-14T23:44:34.590405640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 14 23:44:34.590444 containerd[1695]: time="2026-01-14T23:44:34.590413680Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:44:34.590444 containerd[1695]: time="2026-01-14T23:44:34.590422280Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 14 23:44:34.590444 containerd[1695]: time="2026-01-14T23:44:34.590430440Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 14 23:44:34.590444 containerd[1695]: time="2026-01-14T23:44:34.590439600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590449440Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590461080Z" level=info msg="runtime interface created" Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590465920Z" level=info msg="created NRI interface" Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590473560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590484880Z" level=info msg="Connect containerd service" Jan 14 23:44:34.590563 containerd[1695]: time="2026-01-14T23:44:34.590505320Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 14 23:44:34.591190 containerd[1695]: time="2026-01-14T23:44:34.591150040Z" level=error msg="failed to load cni during init, please check CRI plugin status 
before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:44:34.593622 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 14 23:44:34.597373 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 14 23:44:34.599635 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 14 23:44:34.601178 systemd[1]: Reached target getty.target - Login Prompts. Jan 14 23:44:34.684454 systemd-networkd[1602]: eth0: Gained IPv6LL Jan 14 23:44:34.688834 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 14 23:44:34.691927 systemd[1]: Reached target network-online.target - Network is Online. Jan 14 23:44:34.695505 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:44:34.698827 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 14 23:44:34.735229 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 14 23:44:34.753749 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 14 23:44:34.756294 systemd[1]: Started sshd@0-10.0.22.230:22-20.161.92.111:45300.service - OpenSSH per-connection server daemon (20.161.92.111:45300). 
Jan 14 23:44:34.773285 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 14 23:44:34.776000 containerd[1695]: time="2026-01-14T23:44:34.775945000Z" level=info msg="Start subscribing containerd event" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776085160Z" level=info msg="Start recovering state" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776196160Z" level=info msg="Start event monitor" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776220240Z" level=info msg="Start cni network conf syncer for default" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776233840Z" level=info msg="Start streaming server" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776251640Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776259800Z" level=info msg="runtime interface starting up..." Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776342480Z" level=info msg="starting plugins..." Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776362080Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776833400Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776888560Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 14 23:44:34.791557 containerd[1695]: time="2026-01-14T23:44:34.776985600Z" level=info msg="containerd successfully booted in 0.265889s" Jan 14 23:44:34.777126 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 14 23:44:34.797416 extend-filesystems[1711]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 14 23:44:34.797416 extend-filesystems[1711]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 14 23:44:34.797416 extend-filesystems[1711]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 14 23:44:34.801045 extend-filesystems[1660]: Resized filesystem in /dev/vda9 Jan 14 23:44:34.799202 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 14 23:44:34.801808 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 14 23:44:34.841971 tar[1680]: linux-arm64/README.md Jan 14 23:44:34.859242 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 14 23:44:35.286316 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:35.317504 sshd[1784]: Accepted publickey for core from 20.161.92.111 port 45300 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:35.319396 sshd-session[1784]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:35.328966 systemd-logind[1670]: New session 1 of user core. Jan 14 23:44:35.330581 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 14 23:44:35.333017 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 14 23:44:35.360409 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 14 23:44:35.363700 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 14 23:44:35.384751 (systemd)[1797]: pam_unix(systemd-user:session): session opened for user core(uid=500) by (uid=0) Jan 14 23:44:35.387080 systemd-logind[1670]: New session c1 of user core. Jan 14 23:44:35.508990 systemd[1797]: Queued start job for default target default.target. Jan 14 23:44:35.529459 systemd[1797]: Created slice app.slice - User Application Slice. 
Jan 14 23:44:35.529582 systemd[1797]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 14 23:44:35.529597 systemd[1797]: Reached target paths.target - Paths. Jan 14 23:44:35.529653 systemd[1797]: Reached target timers.target - Timers. Jan 14 23:44:35.531049 systemd[1797]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 14 23:44:35.531867 systemd[1797]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 14 23:44:35.540488 systemd[1797]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 14 23:44:35.540539 systemd[1797]: Reached target sockets.target - Sockets. Jan 14 23:44:35.546312 systemd[1797]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 14 23:44:35.546425 systemd[1797]: Reached target basic.target - Basic System. Jan 14 23:44:35.546484 systemd[1797]: Reached target default.target - Main User Target. Jan 14 23:44:35.546525 systemd[1797]: Startup finished in 153ms. Jan 14 23:44:35.546641 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 14 23:44:35.554855 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 14 23:44:35.562318 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:35.564051 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:44:35.567977 (kubelet)[1814]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:44:35.862635 systemd[1]: Started sshd@1-10.0.22.230:22-20.161.92.111:38092.service - OpenSSH per-connection server daemon (20.161.92.111:38092). 
Jan 14 23:44:36.062703 kubelet[1814]: E0114 23:44:36.062624 1814 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:44:36.064887 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:44:36.065026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:44:36.065409 systemd[1]: kubelet.service: Consumed 758ms CPU time, 257.6M memory peak. Jan 14 23:44:36.412310 sshd[1822]: Accepted publickey for core from 20.161.92.111 port 38092 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:36.412994 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:36.416943 systemd-logind[1670]: New session 2 of user core. Jan 14 23:44:36.426678 systemd[1]: Started session-2.scope - Session 2 of User core. Jan 14 23:44:36.713989 sshd[1827]: Connection closed by 20.161.92.111 port 38092 Jan 14 23:44:36.714317 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:36.717843 systemd[1]: sshd@1-10.0.22.230:22-20.161.92.111:38092.service: Deactivated successfully. Jan 14 23:44:36.719543 systemd[1]: session-2.scope: Deactivated successfully. Jan 14 23:44:36.720327 systemd-logind[1670]: Session 2 logged out. Waiting for processes to exit. Jan 14 23:44:36.721171 systemd-logind[1670]: Removed session 2. Jan 14 23:44:36.824892 systemd[1]: Started sshd@2-10.0.22.230:22-20.161.92.111:38096.service - OpenSSH per-connection server daemon (20.161.92.111:38096). 
Jan 14 23:44:37.293311 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:37.366330 sshd[1833]: Accepted publickey for core from 20.161.92.111 port 38096 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:37.367063 sshd-session[1833]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:37.371542 systemd-logind[1670]: New session 3 of user core. Jan 14 23:44:37.381455 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 14 23:44:37.574351 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:37.659214 sshd[1837]: Connection closed by 20.161.92.111 port 38096 Jan 14 23:44:37.659866 sshd-session[1833]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:37.663941 systemd[1]: sshd@2-10.0.22.230:22-20.161.92.111:38096.service: Deactivated successfully. Jan 14 23:44:37.665580 systemd[1]: session-3.scope: Deactivated successfully. Jan 14 23:44:37.666427 systemd-logind[1670]: Session 3 logged out. Waiting for processes to exit. Jan 14 23:44:37.667571 systemd-logind[1670]: Removed session 3. 
Jan 14 23:44:41.300320 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:41.306674 coreos-metadata[1654]: Jan 14 23:44:41.306 WARN failed to locate config-drive, using the metadata service API instead Jan 14 23:44:41.322250 coreos-metadata[1654]: Jan 14 23:44:41.322 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 14 23:44:41.591305 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 14 23:44:41.598606 coreos-metadata[1744]: Jan 14 23:44:41.598 WARN failed to locate config-drive, using the metadata service API instead Jan 14 23:44:41.611278 coreos-metadata[1744]: Jan 14 23:44:41.611 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 14 23:44:41.717733 coreos-metadata[1654]: Jan 14 23:44:41.717 INFO Fetch successful Jan 14 23:44:41.717809 coreos-metadata[1654]: Jan 14 23:44:41.717 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 14 23:44:41.888946 coreos-metadata[1744]: Jan 14 23:44:41.888 INFO Fetch successful Jan 14 23:44:41.888946 coreos-metadata[1744]: Jan 14 23:44:41.888 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 14 23:44:41.991363 coreos-metadata[1654]: Jan 14 23:44:41.991 INFO Fetch successful Jan 14 23:44:41.991363 coreos-metadata[1654]: Jan 14 23:44:41.991 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 14 23:44:42.159236 coreos-metadata[1744]: Jan 14 23:44:42.159 INFO Fetch successful Jan 14 23:44:42.160912 unknown[1744]: wrote ssh authorized keys file for user: core Jan 14 23:44:42.166462 coreos-metadata[1654]: Jan 14 23:44:42.166 INFO Fetch successful Jan 14 23:44:42.166462 coreos-metadata[1654]: Jan 14 23:44:42.166 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 14 23:44:42.192014 update-ssh-keys[1852]: Updated "/home/core/.ssh/authorized_keys" Jan 14 23:44:42.193035 systemd[1]: Finished 
coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 14 23:44:42.196179 systemd[1]: Finished sshkeys.service. Jan 14 23:44:42.316169 coreos-metadata[1654]: Jan 14 23:44:42.316 INFO Fetch successful Jan 14 23:44:42.316169 coreos-metadata[1654]: Jan 14 23:44:42.316 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 14 23:44:42.453582 coreos-metadata[1654]: Jan 14 23:44:42.453 INFO Fetch successful Jan 14 23:44:42.453582 coreos-metadata[1654]: Jan 14 23:44:42.453 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 14 23:44:42.594919 coreos-metadata[1654]: Jan 14 23:44:42.594 INFO Fetch successful Jan 14 23:44:42.637480 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 14 23:44:42.637913 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 14 23:44:42.638050 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 14 23:44:42.641395 systemd[1]: Startup finished in 2.515s (kernel) + 13.531s (initrd) + 10.949s (userspace) = 26.996s. Jan 14 23:44:46.254242 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 14 23:44:46.255776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:44:46.405044 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:44:46.408917 (kubelet)[1868]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:44:46.444250 kubelet[1868]: E0114 23:44:46.444179 1868 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:44:46.447069 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:44:46.447201 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:44:46.447567 systemd[1]: kubelet.service: Consumed 142ms CPU time, 107.8M memory peak. Jan 14 23:44:47.771572 systemd[1]: Started sshd@3-10.0.22.230:22-20.161.92.111:32898.service - OpenSSH per-connection server daemon (20.161.92.111:32898). Jan 14 23:44:48.304178 sshd[1877]: Accepted publickey for core from 20.161.92.111 port 32898 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:48.304953 sshd-session[1877]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:48.308644 systemd-logind[1670]: New session 4 of user core. Jan 14 23:44:48.322651 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 14 23:44:48.597394 sshd[1880]: Connection closed by 20.161.92.111 port 32898 Jan 14 23:44:48.597189 sshd-session[1877]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:48.601261 systemd[1]: sshd@3-10.0.22.230:22-20.161.92.111:32898.service: Deactivated successfully. Jan 14 23:44:48.604648 systemd[1]: session-4.scope: Deactivated successfully. Jan 14 23:44:48.605338 systemd-logind[1670]: Session 4 logged out. Waiting for processes to exit. Jan 14 23:44:48.606234 systemd-logind[1670]: Removed session 4. 
Jan 14 23:44:48.711402 systemd[1]: Started sshd@4-10.0.22.230:22-20.161.92.111:32904.service - OpenSSH per-connection server daemon (20.161.92.111:32904). Jan 14 23:44:49.262327 sshd[1886]: Accepted publickey for core from 20.161.92.111 port 32904 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:49.263089 sshd-session[1886]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:49.268097 systemd-logind[1670]: New session 5 of user core. Jan 14 23:44:49.279416 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 14 23:44:49.561317 sshd[1889]: Connection closed by 20.161.92.111 port 32904 Jan 14 23:44:49.561520 sshd-session[1886]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:49.565138 systemd-logind[1670]: Session 5 logged out. Waiting for processes to exit. Jan 14 23:44:49.565325 systemd[1]: sshd@4-10.0.22.230:22-20.161.92.111:32904.service: Deactivated successfully. Jan 14 23:44:49.566841 systemd[1]: session-5.scope: Deactivated successfully. Jan 14 23:44:49.568223 systemd-logind[1670]: Removed session 5. Jan 14 23:44:49.671441 systemd[1]: Started sshd@5-10.0.22.230:22-20.161.92.111:32918.service - OpenSSH per-connection server daemon (20.161.92.111:32918). Jan 14 23:44:50.190322 sshd[1895]: Accepted publickey for core from 20.161.92.111 port 32918 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:50.191424 sshd-session[1895]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:50.195881 systemd-logind[1670]: New session 6 of user core. Jan 14 23:44:50.211654 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 14 23:44:50.483910 sshd[1898]: Connection closed by 20.161.92.111 port 32918 Jan 14 23:44:50.484215 sshd-session[1895]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:50.487795 systemd[1]: sshd@5-10.0.22.230:22-20.161.92.111:32918.service: Deactivated successfully. 
Jan 14 23:44:50.489278 systemd[1]: session-6.scope: Deactivated successfully. Jan 14 23:44:50.489979 systemd-logind[1670]: Session 6 logged out. Waiting for processes to exit. Jan 14 23:44:50.490921 systemd-logind[1670]: Removed session 6. Jan 14 23:44:50.594635 systemd[1]: Started sshd@6-10.0.22.230:22-20.161.92.111:32932.service - OpenSSH per-connection server daemon (20.161.92.111:32932). Jan 14 23:44:51.139036 sshd[1904]: Accepted publickey for core from 20.161.92.111 port 32932 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:51.140217 sshd-session[1904]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:51.144030 systemd-logind[1670]: New session 7 of user core. Jan 14 23:44:51.155527 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 14 23:44:51.349845 sudo[1908]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 14 23:44:51.350093 sudo[1908]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:44:51.364727 sudo[1908]: pam_unix(sudo:session): session closed for user root Jan 14 23:44:51.461595 sshd[1907]: Connection closed by 20.161.92.111 port 32932 Jan 14 23:44:51.462225 sshd-session[1904]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:51.466287 systemd[1]: sshd@6-10.0.22.230:22-20.161.92.111:32932.service: Deactivated successfully. Jan 14 23:44:51.467976 systemd[1]: session-7.scope: Deactivated successfully. Jan 14 23:44:51.470328 systemd-logind[1670]: Session 7 logged out. Waiting for processes to exit. Jan 14 23:44:51.471679 systemd-logind[1670]: Removed session 7. Jan 14 23:44:51.583009 systemd[1]: Started sshd@7-10.0.22.230:22-20.161.92.111:32944.service - OpenSSH per-connection server daemon (20.161.92.111:32944). 
Jan 14 23:44:52.121315 sshd[1914]: Accepted publickey for core from 20.161.92.111 port 32944 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:52.122088 sshd-session[1914]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:52.126830 systemd-logind[1670]: New session 8 of user core. Jan 14 23:44:52.142656 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 14 23:44:52.325127 sudo[1919]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 14 23:44:52.325404 sudo[1919]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:44:52.329889 sudo[1919]: pam_unix(sudo:session): session closed for user root Jan 14 23:44:52.335595 sudo[1918]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 14 23:44:52.336109 sudo[1918]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:44:52.344704 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 14 23:44:52.390000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:44:52.391366 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 23:44:52.391409 kernel: audit: type=1305 audit(1768434292.390:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 14 23:44:52.391535 augenrules[1941]: No rules Jan 14 23:44:52.390000 audit[1941]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9b69230 a2=420 a3=0 items=0 ppid=1922 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:52.395954 kernel: audit: type=1300 audit(1768434292.390:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd9b69230 a2=420 a3=0 items=0 ppid=1922 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:52.396342 kernel: audit: type=1327 audit(1768434292.390:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:44:52.390000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 14 23:44:52.397139 systemd[1]: audit-rules.service: Deactivated successfully. Jan 14 23:44:52.397478 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 14 23:44:52.397000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:52.399927 kernel: audit: type=1130 audit(1768434292.397:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.400017 kernel: audit: type=1131 audit(1768434292.397:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.397000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.400472 sudo[1918]: pam_unix(sudo:session): session closed for user root Jan 14 23:44:52.399000 audit[1918]: USER_END pid=1918 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.404737 kernel: audit: type=1106 audit(1768434292.399:233): pid=1918 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.404838 kernel: audit: type=1104 audit(1768434292.399:234): pid=1918 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.399000 audit[1918]: CRED_DISP pid=1918 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:44:52.500527 sshd[1917]: Connection closed by 20.161.92.111 port 32944 Jan 14 23:44:52.500929 sshd-session[1914]: pam_unix(sshd:session): session closed for user core Jan 14 23:44:52.502000 audit[1914]: USER_END pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:52.505606 systemd[1]: sshd@7-10.0.22.230:22-20.161.92.111:32944.service: Deactivated successfully. Jan 14 23:44:52.502000 audit[1914]: CRED_DISP pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:52.507121 systemd[1]: session-8.scope: Deactivated successfully. Jan 14 23:44:52.508231 systemd-logind[1670]: Session 8 logged out. Waiting for processes to exit. Jan 14 23:44:52.509207 systemd-logind[1670]: Removed session 8. 
Jan 14 23:44:52.509598 kernel: audit: type=1106 audit(1768434292.502:235): pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:52.509631 kernel: audit: type=1104 audit(1768434292.502:236): pid=1914 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:52.509645 kernel: audit: type=1131 audit(1768434292.504:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.22.230:22-20.161.92.111:32944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.504000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.22.230:22-20.161.92.111:32944 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.618000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.22.230:22-20.161.92.111:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:52.618537 systemd[1]: Started sshd@8-10.0.22.230:22-20.161.92.111:32960.service - OpenSSH per-connection server daemon (20.161.92.111:32960). 
Jan 14 23:44:53.155000 audit[1950]: USER_ACCT pid=1950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:53.155952 sshd[1950]: Accepted publickey for core from 20.161.92.111 port 32960 ssh2: RSA SHA256:2pPTL0V6h0nrRdf8E8LR7uYjIY+dfolij8SaSnrdjVo Jan 14 23:44:53.157048 sshd-session[1950]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 14 23:44:53.156000 audit[1950]: CRED_ACQ pid=1950 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:53.156000 audit[1950]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdc93dd80 a2=3 a3=0 items=0 ppid=1 pid=1950 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:53.156000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 14 23:44:53.160971 systemd-logind[1670]: New session 9 of user core. Jan 14 23:44:53.177436 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 14 23:44:53.179000 audit[1950]: USER_START pid=1950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:53.180000 audit[1953]: CRED_ACQ pid=1953 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:44:53.354144 sudo[1954]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 14 23:44:53.353000 audit[1954]: USER_ACCT pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock,pam_permit acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:53.353000 audit[1954]: CRED_REFR pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:53.354429 sudo[1954]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 14 23:44:53.355000 audit[1954]: USER_START pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:44:53.682825 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 14 23:44:53.697521 (dockerd)[1975]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 14 23:44:53.936054 dockerd[1975]: time="2026-01-14T23:44:53.935717800Z" level=info msg="Starting up" Jan 14 23:44:53.936675 dockerd[1975]: time="2026-01-14T23:44:53.936651200Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 14 23:44:53.946915 dockerd[1975]: time="2026-01-14T23:44:53.946865320Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 14 23:44:53.981194 dockerd[1975]: time="2026-01-14T23:44:53.981135120Z" level=info msg="Loading containers: start." Jan 14 23:44:53.991286 kernel: Initializing XFRM netlink socket Jan 14 23:44:54.048000 audit[2028]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.048000 audit[2028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffffbd8b760 a2=0 a3=0 items=0 ppid=1975 pid=2028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.048000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:44:54.050000 audit[2030]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2030 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.050000 audit[2030]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe37be480 a2=0 a3=0 items=0 ppid=1975 pid=2030 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:54.050000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:44:54.052000 audit[2032]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.052000 audit[2032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffea978100 a2=0 a3=0 items=0 ppid=1975 pid=2032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.052000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:44:54.054000 audit[2034]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.054000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc565a7f0 a2=0 a3=0 items=0 ppid=1975 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.054000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:44:54.056000 audit[2036]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.056000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc4026340 a2=0 a3=0 items=0 ppid=1975 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.056000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:44:54.057000 audit[2038]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.057000 audit[2038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc3de8170 a2=0 a3=0 items=0 ppid=1975 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.057000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:44:54.059000 audit[2040]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.059000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe646c070 a2=0 a3=0 items=0 ppid=1975 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.059000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:44:54.061000 audit[2042]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.061000 audit[2042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffec30b7e0 a2=0 a3=0 items=0 ppid=1975 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:44:54.061000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:44:54.093000 audit[2045]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2045 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.093000 audit[2045]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd37d70a0 a2=0 a3=0 items=0 ppid=1975 pid=2045 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.093000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 14 23:44:54.095000 audit[2047]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2047 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.095000 audit[2047]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffddc1bf10 a2=0 a3=0 items=0 ppid=1975 pid=2047 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.095000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:44:54.096000 audit[2049]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2049 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.096000 audit[2049]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe6e73d90 a2=0 a3=0 items=0 ppid=1975 pid=2049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.096000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:44:54.098000 audit[2051]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2051 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.098000 audit[2051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe1647b90 a2=0 a3=0 items=0 ppid=1975 pid=2051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.098000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:44:54.100000 audit[2053]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.100000 audit[2053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff9300770 a2=0 a3=0 items=0 ppid=1975 pid=2053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.100000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:44:54.139000 audit[2083]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.139000 audit[2083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff1a591c0 a2=0 a3=0 items=0 ppid=1975 pid=2083 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.139000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 14 23:44:54.140000 audit[2085]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2085 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.140000 audit[2085]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff728b210 a2=0 a3=0 items=0 ppid=1975 pid=2085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.140000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 14 23:44:54.142000 audit[2087]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.142000 audit[2087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0bba550 a2=0 a3=0 items=0 ppid=1975 pid=2087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.142000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 14 23:44:54.144000 audit[2089]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.144000 audit[2089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcd33e260 a2=0 a3=0 items=0 ppid=1975 pid=2089 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.144000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 14 23:44:54.146000 audit[2091]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2091 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.146000 audit[2091]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe3e50f50 a2=0 a3=0 items=0 ppid=1975 pid=2091 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.146000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 14 23:44:54.147000 audit[2093]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.147000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffd1db8b90 a2=0 a3=0 items=0 ppid=1975 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.147000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:44:54.149000 audit[2095]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.149000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe0d56040 a2=0 a3=0 items=0 ppid=1975 pid=2095 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.149000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:44:54.151000 audit[2097]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.151000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffea647dc0 a2=0 a3=0 items=0 ppid=1975 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.151000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 14 23:44:54.152000 audit[2099]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.152000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffd973cb60 a2=0 a3=0 items=0 ppid=1975 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.152000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 14 23:44:54.154000 audit[2101]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2101 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.154000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcf2a9e00 a2=0 a3=0 items=0 ppid=1975 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.154000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 14 23:44:54.156000 audit[2103]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.156000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffe141c570 a2=0 a3=0 items=0 ppid=1975 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 14 23:44:54.157000 audit[2105]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.157000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffcf1a1060 a2=0 a3=0 items=0 ppid=1975 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.157000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 14 23:44:54.159000 audit[2107]: NETFILTER_CFG 
table=filter:27 family=10 entries=1 op=nft_register_rule pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.159000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffffc314080 a2=0 a3=0 items=0 ppid=1975 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 14 23:44:54.164000 audit[2112]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2112 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.164000 audit[2112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd1d4be50 a2=0 a3=0 items=0 ppid=1975 pid=2112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.164000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:44:54.166000 audit[2114]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2114 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.166000 audit[2114]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffee92a7f0 a2=0 a3=0 items=0 ppid=1975 pid=2114 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:44:54.167000 audit[2116]: NETFILTER_CFG 
table=filter:30 family=2 entries=1 op=nft_register_rule pid=2116 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.167000 audit[2116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc99955a0 a2=0 a3=0 items=0 ppid=1975 pid=2116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.167000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:44:54.169000 audit[2118]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.169000 audit[2118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffccb04840 a2=0 a3=0 items=0 ppid=1975 pid=2118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.169000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 14 23:44:54.171000 audit[2120]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2120 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.171000 audit[2120]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc566e740 a2=0 a3=0 items=0 ppid=1975 pid=2120 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.171000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 14 23:44:54.172000 audit[2122]: NETFILTER_CFG table=filter:33 
family=10 entries=1 op=nft_register_rule pid=2122 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:44:54.172000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd4b661b0 a2=0 a3=0 items=0 ppid=1975 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.172000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 14 23:44:54.196000 audit[2127]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2127 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.196000 audit[2127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffdbd3dbc0 a2=0 a3=0 items=0 ppid=1975 pid=2127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.196000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 14 23:44:54.198000 audit[2129]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.198000 audit[2129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=fffff626d560 a2=0 a3=0 items=0 ppid=1975 pid=2129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.198000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 14 23:44:54.206000 audit[2137]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.206000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffc08830a0 a2=0 a3=0 items=0 ppid=1975 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.206000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 14 23:44:54.223000 audit[2143]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.223000 audit[2143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffee39f820 a2=0 a3=0 items=0 ppid=1975 pid=2143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.223000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 14 23:44:54.225000 audit[2145]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2145 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.225000 audit[2145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff41724c0 a2=0 a3=0 items=0 ppid=1975 pid=2145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.225000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 14 23:44:54.227000 audit[2147]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.227000 audit[2147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffab43070 a2=0 a3=0 items=0 ppid=1975 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.227000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 14 23:44:54.229000 audit[2149]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:44:54.229000 audit[2149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffedd626d0 a2=0 a3=0 items=0 ppid=1975 pid=2149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.229000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 14 23:44:54.231000 audit[2151]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2151 subj=system_u:system_r:kernel_t:s0 
comm="iptables" Jan 14 23:44:54.231000 audit[2151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=fffffc7f8900 a2=0 a3=0 items=0 ppid=1975 pid=2151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:44:54.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 14 23:44:54.231972 systemd-networkd[1602]: docker0: Link UP Jan 14 23:44:54.252376 dockerd[1975]: time="2026-01-14T23:44:54.252172160Z" level=info msg="Loading containers: done." Jan 14 23:44:54.263874 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3629491401-merged.mount: Deactivated successfully. Jan 14 23:44:54.272754 dockerd[1975]: time="2026-01-14T23:44:54.272333320Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 14 23:44:54.272754 dockerd[1975]: time="2026-01-14T23:44:54.272412320Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 14 23:44:54.272754 dockerd[1975]: time="2026-01-14T23:44:54.272580120Z" level=info msg="Initializing buildkit" Jan 14 23:44:54.292870 dockerd[1975]: time="2026-01-14T23:44:54.292838200Z" level=info msg="Completed buildkit initialization" Jan 14 23:44:54.299207 dockerd[1975]: time="2026-01-14T23:44:54.299168760Z" level=info msg="Daemon has completed initialization" Jan 14 23:44:54.299519 systemd[1]: Started docker.service - Docker Application Container Engine. 
Jan 14 23:44:54.299639 dockerd[1975]: time="2026-01-14T23:44:54.299418280Z" level=info msg="API listen on /run/docker.sock" Jan 14 23:44:54.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:55.325040 containerd[1695]: time="2026-01-14T23:44:55.325002200Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 14 23:44:56.055479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3749030008.mount: Deactivated successfully. Jan 14 23:44:56.503816 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 14 23:44:56.505160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:44:56.656959 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:44:56.656000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:44:56.660748 (kubelet)[2253]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:44:56.699427 kubelet[2253]: E0114 23:44:56.699360 2253 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:44:56.702252 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:44:56.702389 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
Jan 14 23:44:56.702000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:44:56.702894 systemd[1]: kubelet.service: Consumed 138ms CPU time, 105.9M memory peak. Jan 14 23:44:56.960849 containerd[1695]: time="2026-01-14T23:44:56.960734480Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:56.962552 containerd[1695]: time="2026-01-14T23:44:56.962170640Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845792" Jan 14 23:44:56.963519 containerd[1695]: time="2026-01-14T23:44:56.963470880Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:56.966328 containerd[1695]: time="2026-01-14T23:44:56.966300560Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:56.968000 containerd[1695]: time="2026-01-14T23:44:56.967901040Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.64286016s" Jan 14 23:44:56.968000 containerd[1695]: time="2026-01-14T23:44:56.967937800Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 14 
23:44:56.968656 containerd[1695]: time="2026-01-14T23:44:56.968561360Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 14 23:44:58.148335 chronyd[1652]: Selected source PHC0 Jan 14 23:44:58.384751 containerd[1695]: time="2026-01-14T23:44:58.384699368Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:58.385664 containerd[1695]: time="2026-01-14T23:44:58.385627991Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 14 23:44:58.386599 containerd[1695]: time="2026-01-14T23:44:58.386562187Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:58.389977 containerd[1695]: time="2026-01-14T23:44:58.389930691Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:58.391478 containerd[1695]: time="2026-01-14T23:44:58.391451665Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.422852765s" Jan 14 23:44:58.391521 containerd[1695]: time="2026-01-14T23:44:58.391478212Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 14 23:44:58.391986 containerd[1695]: 
time="2026-01-14T23:44:58.391958254Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 14 23:44:59.553564 containerd[1695]: time="2026-01-14T23:44:59.553489275Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:59.554457 containerd[1695]: time="2026-01-14T23:44:59.554419915Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 14 23:44:59.555537 containerd[1695]: time="2026-01-14T23:44:59.555478741Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:59.557539 containerd[1695]: time="2026-01-14T23:44:59.557497101Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:44:59.558919 containerd[1695]: time="2026-01-14T23:44:59.558890475Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.166906444s" Jan 14 23:44:59.558960 containerd[1695]: time="2026-01-14T23:44:59.558918745Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 14 23:44:59.559371 containerd[1695]: time="2026-01-14T23:44:59.559347490Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 14 23:45:00.524146 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount1381577242.mount: Deactivated successfully. Jan 14 23:45:00.737765 containerd[1695]: time="2026-01-14T23:45:00.737694153Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:00.738816 containerd[1695]: time="2026-01-14T23:45:00.738766619Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=17713718" Jan 14 23:45:00.739735 containerd[1695]: time="2026-01-14T23:45:00.739684132Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:00.741676 containerd[1695]: time="2026-01-14T23:45:00.741628278Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:00.742201 containerd[1695]: time="2026-01-14T23:45:00.742178243Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.182798779s" Jan 14 23:45:00.742248 containerd[1695]: time="2026-01-14T23:45:00.742207430Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 14 23:45:00.742681 containerd[1695]: time="2026-01-14T23:45:00.742650218Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 14 23:45:01.404483 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount702864318.mount: Deactivated successfully. 
Jan 14 23:45:01.866731 containerd[1695]: time="2026-01-14T23:45:01.866663608Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:01.868856 containerd[1695]: time="2026-01-14T23:45:01.868780161Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15958731" Jan 14 23:45:01.869625 containerd[1695]: time="2026-01-14T23:45:01.869582394Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:01.872806 containerd[1695]: time="2026-01-14T23:45:01.872762210Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:01.874410 containerd[1695]: time="2026-01-14T23:45:01.874384092Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.131703001s" Jan 14 23:45:01.874453 containerd[1695]: time="2026-01-14T23:45:01.874413499Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 14 23:45:01.874895 containerd[1695]: time="2026-01-14T23:45:01.874861310Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 14 23:45:02.313081 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount610512208.mount: Deactivated successfully. 
Jan 14 23:45:02.319222 containerd[1695]: time="2026-01-14T23:45:02.319147607Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:45:02.320049 containerd[1695]: time="2026-01-14T23:45:02.319988963Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 14 23:45:02.320846 containerd[1695]: time="2026-01-14T23:45:02.320807072Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:45:02.323463 containerd[1695]: time="2026-01-14T23:45:02.323435604Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 14 23:45:02.324840 containerd[1695]: time="2026-01-14T23:45:02.323950992Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 449.052208ms" Jan 14 23:45:02.324840 containerd[1695]: time="2026-01-14T23:45:02.323979555Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 14 23:45:02.324840 containerd[1695]: time="2026-01-14T23:45:02.324353226Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 14 23:45:03.044932 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount141414863.mount: Deactivated 
successfully. Jan 14 23:45:04.951934 containerd[1695]: time="2026-01-14T23:45:04.951825373Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:04.953707 containerd[1695]: time="2026-01-14T23:45:04.953359178Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 14 23:45:04.954826 containerd[1695]: time="2026-01-14T23:45:04.954787222Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:04.957685 containerd[1695]: time="2026-01-14T23:45:04.957636751Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:04.959367 containerd[1695]: time="2026-01-14T23:45:04.959337396Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.634961053s" Jan 14 23:45:04.959432 containerd[1695]: time="2026-01-14T23:45:04.959369476Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 14 23:45:06.754096 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 14 23:45:06.755538 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:45:06.900480 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:45:06.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:06.901536 kernel: kauditd_printk_skb: 134 callbacks suppressed Jan 14 23:45:06.901582 kernel: audit: type=1130 audit(1768434306.900:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:06.929011 (kubelet)[2422]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 14 23:45:06.970780 kubelet[2422]: E0114 23:45:06.970727 2422 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 14 23:45:06.973116 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 14 23:45:06.973248 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 14 23:45:06.974831 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.4M memory peak. Jan 14 23:45:06.974000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:45:06.978288 kernel: audit: type=1131 audit(1768434306.974:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:45:11.678448 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:45:11.678000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:11.678609 systemd[1]: kubelet.service: Consumed 139ms CPU time, 107.4M memory peak. Jan 14 23:45:11.678000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:11.682024 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:45:11.683789 kernel: audit: type=1130 audit(1768434311.678:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:11.683836 kernel: audit: type=1131 audit(1768434311.678:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:11.706173 systemd[1]: Reload requested from client PID 2437 ('systemctl') (unit session-9.scope)... Jan 14 23:45:11.706194 systemd[1]: Reloading... Jan 14 23:45:11.773621 zram_generator::config[2481]: No configuration found. Jan 14 23:45:11.953995 systemd[1]: Reloading finished in 247 ms. 
Jan 14 23:45:11.983000 audit: BPF prog-id=63 op=LOAD Jan 14 23:45:11.985358 kernel: audit: type=1334 audit(1768434311.983:294): prog-id=63 op=LOAD Jan 14 23:45:11.984000 audit: BPF prog-id=55 op=UNLOAD Jan 14 23:45:11.985000 audit: BPF prog-id=64 op=LOAD Jan 14 23:45:11.987974 kernel: audit: type=1334 audit(1768434311.984:295): prog-id=55 op=UNLOAD Jan 14 23:45:11.988022 kernel: audit: type=1334 audit(1768434311.985:296): prog-id=64 op=LOAD Jan 14 23:45:11.985000 audit: BPF prog-id=49 op=UNLOAD Jan 14 23:45:11.989075 kernel: audit: type=1334 audit(1768434311.985:297): prog-id=49 op=UNLOAD Jan 14 23:45:11.989130 kernel: audit: type=1334 audit(1768434311.985:298): prog-id=65 op=LOAD Jan 14 23:45:11.985000 audit: BPF prog-id=65 op=LOAD Jan 14 23:45:11.989884 kernel: audit: type=1334 audit(1768434311.985:299): prog-id=66 op=LOAD Jan 14 23:45:11.985000 audit: BPF prog-id=66 op=LOAD Jan 14 23:45:11.990694 kernel: audit: type=1334 audit(1768434311.985:300): prog-id=50 op=UNLOAD Jan 14 23:45:11.985000 audit: BPF prog-id=50 op=UNLOAD Jan 14 23:45:11.985000 audit: BPF prog-id=51 op=UNLOAD Jan 14 23:45:11.992342 kernel: audit: type=1334 audit(1768434311.985:301): prog-id=51 op=UNLOAD Jan 14 23:45:11.995109 kernel: audit: type=1334 audit(1768434311.985:302): prog-id=67 op=LOAD Jan 14 23:45:11.995172 kernel: audit: type=1334 audit(1768434311.985:303): prog-id=46 op=UNLOAD Jan 14 23:45:11.985000 audit: BPF prog-id=67 op=LOAD Jan 14 23:45:11.985000 audit: BPF prog-id=46 op=UNLOAD Jan 14 23:45:11.987000 audit: BPF prog-id=68 op=LOAD Jan 14 23:45:11.992000 audit: BPF prog-id=69 op=LOAD Jan 14 23:45:11.992000 audit: BPF prog-id=47 op=UNLOAD Jan 14 23:45:11.992000 audit: BPF prog-id=48 op=UNLOAD Jan 14 23:45:11.993000 audit: BPF prog-id=70 op=LOAD Jan 14 23:45:11.994000 audit: BPF prog-id=43 op=UNLOAD Jan 14 23:45:11.994000 audit: BPF prog-id=71 op=LOAD Jan 14 23:45:11.994000 audit: BPF prog-id=72 op=LOAD Jan 14 23:45:11.994000 audit: BPF prog-id=44 op=UNLOAD Jan 14 23:45:11.994000 
audit: BPF prog-id=45 op=UNLOAD Jan 14 23:45:11.995000 audit: BPF prog-id=73 op=LOAD Jan 14 23:45:11.995000 audit: BPF prog-id=74 op=LOAD Jan 14 23:45:11.995000 audit: BPF prog-id=56 op=UNLOAD Jan 14 23:45:11.995000 audit: BPF prog-id=57 op=UNLOAD Jan 14 23:45:11.996000 audit: BPF prog-id=75 op=LOAD Jan 14 23:45:11.996000 audit: BPF prog-id=60 op=UNLOAD Jan 14 23:45:11.996000 audit: BPF prog-id=76 op=LOAD Jan 14 23:45:11.996000 audit: BPF prog-id=77 op=LOAD Jan 14 23:45:11.996000 audit: BPF prog-id=61 op=UNLOAD Jan 14 23:45:11.996000 audit: BPF prog-id=62 op=UNLOAD Jan 14 23:45:11.997000 audit: BPF prog-id=78 op=LOAD Jan 14 23:45:11.997000 audit: BPF prog-id=58 op=UNLOAD Jan 14 23:45:11.997000 audit: BPF prog-id=79 op=LOAD Jan 14 23:45:11.997000 audit: BPF prog-id=59 op=UNLOAD Jan 14 23:45:11.999000 audit: BPF prog-id=80 op=LOAD Jan 14 23:45:11.999000 audit: BPF prog-id=52 op=UNLOAD Jan 14 23:45:11.999000 audit: BPF prog-id=81 op=LOAD Jan 14 23:45:11.999000 audit: BPF prog-id=82 op=LOAD Jan 14 23:45:11.999000 audit: BPF prog-id=53 op=UNLOAD Jan 14 23:45:11.999000 audit: BPF prog-id=54 op=UNLOAD Jan 14 23:45:12.028205 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 14 23:45:12.028444 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 14 23:45:12.028803 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:45:12.028000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 14 23:45:12.028865 systemd[1]: kubelet.service: Consumed 90ms CPU time, 94.9M memory peak. Jan 14 23:45:12.030436 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:45:12.145622 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 14 23:45:12.145000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:12.149212 (kubelet)[2531]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:45:12.179951 kubelet[2531]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:45:12.179951 kubelet[2531]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 14 23:45:12.179951 kubelet[2531]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 14 23:45:12.180261 kubelet[2531]: I0114 23:45:12.180003 2531 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 14 23:45:13.497383 kubelet[2531]: I0114 23:45:13.497333 2531 server.go:520] "Kubelet version" kubeletVersion="v1.32.4"
Jan 14 23:45:13.497383 kubelet[2531]: I0114 23:45:13.497364 2531 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK=""
Jan 14 23:45:13.497786 kubelet[2531]: I0114 23:45:13.497623 2531 server.go:954] "Client rotation is on, will bootstrap in background"
Jan 14 23:45:13.524054 kubelet[2531]: E0114 23:45:13.524011 2531 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://10.0.22.230:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.22.230:6443: connect: connection refused" logger="UnhandledError"
Jan 14 23:45:13.526288 kubelet[2531]: I0114 23:45:13.526225 2531 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt"
Jan 14 23:45:13.538069 kubelet[2531]: I0114 23:45:13.538046 2531 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd"
Jan 14 23:45:13.541619 kubelet[2531]: I0114 23:45:13.541586 2531 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /"
Jan 14 23:45:13.543382 kubelet[2531]: I0114 23:45:13.543312 2531 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[]
Jan 14 23:45:13.543541 kubelet[2531]: I0114 23:45:13.543372 2531 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-1d3be4f164","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2}
Jan 14 23:45:13.543652 kubelet[2531]: I0114 23:45:13.543633 2531 topology_manager.go:138] "Creating topology manager with none policy"
Jan 14 23:45:13.543652 kubelet[2531]: I0114 23:45:13.543642 2531 container_manager_linux.go:304] "Creating device plugin manager"
Jan 14 23:45:13.543932 kubelet[2531]: I0114 23:45:13.543854 2531 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 23:45:13.547416 kubelet[2531]: I0114 23:45:13.547378 2531 kubelet.go:446] "Attempting to sync node with API server"
Jan 14 23:45:13.547416 kubelet[2531]: I0114 23:45:13.547411 2531 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests"
Jan 14 23:45:13.547593 kubelet[2531]: I0114 23:45:13.547436 2531 kubelet.go:352] "Adding apiserver pod source"
Jan 14 23:45:13.547593 kubelet[2531]: I0114 23:45:13.547446 2531 apiserver.go:42] "Waiting for node sync before watching apiserver pods"
Jan 14 23:45:13.551180 kubelet[2531]: I0114 23:45:13.551148 2531 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1"
Jan 14 23:45:13.552014 kubelet[2531]: I0114 23:45:13.551984 2531 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode"
Jan 14 23:45:13.552216 kubelet[2531]: W0114 23:45:13.552192 2531 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating.
Jan 14 23:45:13.552314 kubelet[2531]: W0114 23:45:13.552252 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://10.0.22.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-1d3be4f164&limit=500&resourceVersion=0": dial tcp 10.0.22.230:6443: connect: connection refused
Jan 14 23:45:13.552363 kubelet[2531]: E0114 23:45:13.552345 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://10.0.22.230:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4515-1-0-n-1d3be4f164&limit=500&resourceVersion=0\": dial tcp 10.0.22.230:6443: connect: connection refused" logger="UnhandledError"
Jan 14 23:45:13.553129 kubelet[2531]: W0114 23:45:13.553023 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://10.0.22.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 10.0.22.230:6443: connect: connection refused
Jan 14 23:45:13.553129 kubelet[2531]: E0114 23:45:13.553080 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://10.0.22.230:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.22.230:6443: connect: connection refused" logger="UnhandledError"
Jan 14 23:45:13.553489 kubelet[2531]: I0114 23:45:13.553468 2531 watchdog_linux.go:99] "Systemd watchdog is not enabled"
Jan 14 23:45:13.553520 kubelet[2531]: I0114 23:45:13.553515 2531 server.go:1287] "Started kubelet"
Jan 14 23:45:13.554259 kubelet[2531]: I0114 23:45:13.554213 2531 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10
Jan 14 23:45:13.554550 kubelet[2531]: I0114 23:45:13.554527 2531 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock"
Jan 14 23:45:13.554616 kubelet[2531]: I0114 23:45:13.554596 2531 server.go:169] "Starting to listen" address="0.0.0.0" port=10250
Jan 14 23:45:13.555430 kubelet[2531]: I0114 23:45:13.555410 2531 server.go:479] "Adding debug handlers to kubelet server"
Jan 14 23:45:13.555622 kubelet[2531]: I0114 23:45:13.555596 2531 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer"
Jan 14 23:45:13.556325 kubelet[2531]: I0114 23:45:13.556299 2531 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key"
Jan 14 23:45:13.557735 kubelet[2531]: E0114 23:45:13.557370 2531 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.22.230:6443/api/v1/namespaces/default/events\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4515-1-0-n-1d3be4f164.188abda37bf7aa3d default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4515-1-0-n-1d3be4f164,UID:ci-4515-1-0-n-1d3be4f164,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:45:13.553488445 +0000 UTC m=+1.401517882,LastTimestamp:2026-01-14 23:45:13.553488445 +0000 UTC m=+1.401517882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}"
Jan 14 23:45:13.557884 kubelet[2531]: E0114 23:45:13.557847 2531 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-1d3be4f164\" not found"
Jan 14 23:45:13.558536 kubelet[2531]: I0114 23:45:13.558506 2531 volume_manager.go:297] "Starting Kubelet Volume Manager"
Jan 14 23:45:13.558653 kubelet[2531]: E0114 23:45:13.558626 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="200ms"
Jan 14 23:45:13.558714 kubelet[2531]: I0114 23:45:13.558696 2531 desired_state_of_world_populator.go:150] "Desired state populator starts to run"
Jan 14 23:45:13.558768 kubelet[2531]: I0114 23:45:13.558740 2531 reconciler.go:26] "Reconciler: start to sync state"
Jan 14 23:45:13.559104 kubelet[2531]: W0114 23:45:13.559054 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://10.0.22.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 10.0.22.230:6443: connect: connection refused
Jan 14 23:45:13.559178 kubelet[2531]: E0114 23:45:13.559106 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://10.0.22.230:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.22.230:6443: connect: connection refused" logger="UnhandledError"
Jan 14 23:45:13.558000 audit[2545]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2545 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.558000 audit[2545]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdc4a7d30 a2=0 a3=0 items=0 ppid=2531 pid=2545 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.558000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 14 23:45:13.560527 kubelet[2531]: E0114 23:45:13.560331 2531 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem"
Jan 14 23:45:13.560849 kubelet[2531]: I0114 23:45:13.560573 2531 factory.go:221] Registration of the containerd container factory successfully
Jan 14 23:45:13.561022 kubelet[2531]: I0114 23:45:13.560934 2531 factory.go:221] Registration of the systemd container factory successfully
Jan 14 23:45:13.560000 audit[2546]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2546 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.560000 audit[2546]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb65d7e0 a2=0 a3=0 items=0 ppid=2531 pid=2546 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.560000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572
Jan 14 23:45:13.561587 kubelet[2531]: I0114 23:45:13.561559 2531 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory
Jan 14 23:45:13.562000 audit[2548]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2548 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.562000 audit[2548]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffea516a70 a2=0 a3=0 items=0 ppid=2531 pid=2548 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.562000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 14 23:45:13.564000 audit[2550]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2550 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.564000 audit[2550]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff0ca4b90 a2=0 a3=0 items=0 ppid=2531 pid=2550 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C
Jan 14 23:45:13.575000 audit[2556]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2556 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.575000 audit[2556]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe0c06c80 a2=0 a3=0 items=0 ppid=2531 pid=2556 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38
Jan 14 23:45:13.577070 kubelet[2531]: I0114 23:45:13.577027 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4"
Jan 14 23:45:13.577000 audit[2558]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2558 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.577000 audit[2558]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd1355b60 a2=0 a3=0 items=0 ppid=2531 pid=2558 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.577000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 23:45:13.577000 audit[2557]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2557 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 23:45:13.577000 audit[2557]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffdaeac700 a2=0 a3=0 items=0 ppid=2531 pid=2557 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.577000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65
Jan 14 23:45:13.578416 kubelet[2531]: I0114 23:45:13.577482 2531 cpu_manager.go:221] "Starting CPU manager" policy="none"
Jan 14 23:45:13.578416 kubelet[2531]: I0114 23:45:13.577503 2531 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s"
Jan 14 23:45:13.578416 kubelet[2531]: I0114 23:45:13.577523 2531 state_mem.go:36] "Initialized new in-memory state store"
Jan 14 23:45:13.578569 kubelet[2531]: I0114 23:45:13.578542 2531 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6"
Jan 14 23:45:13.578606 kubelet[2531]: I0114 23:45:13.578579 2531 status_manager.go:227] "Starting to sync pod status with apiserver"
Jan 14 23:45:13.578606 kubelet[2531]: I0114 23:45:13.578599 2531 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started."
Jan 14 23:45:13.578606 kubelet[2531]: I0114 23:45:13.578606 2531 kubelet.go:2382] "Starting kubelet main sync loop"
Jan 14 23:45:13.578661 kubelet[2531]: E0114 23:45:13.578646 2531 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]"
Jan 14 23:45:13.578000 audit[2559]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2559 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.578000 audit[2559]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce288c00 a2=0 a3=0 items=0 ppid=2531 pid=2559 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.578000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 23:45:13.579405 kubelet[2531]: W0114 23:45:13.579143 2531 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://10.0.22.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 10.0.22.230:6443: connect: connection refused
Jan 14 23:45:13.579405 kubelet[2531]: E0114 23:45:13.579188 2531 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://10.0.22.230:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.22.230:6443: connect: connection refused" logger="UnhandledError"
Jan 14 23:45:13.579966 kubelet[2531]: I0114 23:45:13.579925 2531 policy_none.go:49] "None policy: Start"
Jan 14 23:45:13.579966 kubelet[2531]: I0114 23:45:13.579951 2531 memory_manager.go:186] "Starting memorymanager" policy="None"
Jan 14 23:45:13.579966 kubelet[2531]: I0114 23:45:13.579964 2531 state_mem.go:35] "Initializing new in-memory state store"
Jan 14 23:45:13.579000 audit[2561]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2561 subj=system_u:system_r:kernel_t:s0 comm="iptables"
Jan 14 23:45:13.579000 audit[2561]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4dafda0 a2=0 a3=0 items=0 ppid=2531 pid=2561 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 23:45:13.580000 audit[2560]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2560 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 23:45:13.580000 audit[2560]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8e6fa90 a2=0 a3=0 items=0 ppid=2531 pid=2560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.580000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65
Jan 14 23:45:13.581000 audit[2562]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2562 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 23:45:13.581000 audit[2562]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4378740 a2=0 a3=0 items=0 ppid=2531 pid=2562 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.581000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174
Jan 14 23:45:13.583000 audit[2563]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2563 subj=system_u:system_r:kernel_t:s0 comm="ip6tables"
Jan 14 23:45:13.583000 audit[2563]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc579bf90 a2=0 a3=0 items=0 ppid=2531 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:13.583000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572
Jan 14 23:45:13.585367 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice.
Jan 14 23:45:13.598425 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice.
Jan 14 23:45:13.619874 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice.
Jan 14 23:45:13.621607 kubelet[2531]: I0114 23:45:13.621242 2531 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found"
Jan 14 23:45:13.621607 kubelet[2531]: I0114 23:45:13.621480 2531 eviction_manager.go:189] "Eviction manager: starting control loop"
Jan 14 23:45:13.621607 kubelet[2531]: I0114 23:45:13.621492 2531 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s"
Jan 14 23:45:13.622393 kubelet[2531]: I0114 23:45:13.621746 2531 plugin_manager.go:118] "Starting Kubelet Plugin Manager"
Jan 14 23:45:13.623034 kubelet[2531]: E0114 23:45:13.623011 2531 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime"
Jan 14 23:45:13.623160 kubelet[2531]: E0114 23:45:13.623147 2531 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4515-1-0-n-1d3be4f164\" not found"
Jan 14 23:45:13.688692 systemd[1]: Created slice kubepods-burstable-pod0b87770b8d26d1b1663c3229f1382cec.slice - libcontainer container kubepods-burstable-pod0b87770b8d26d1b1663c3229f1382cec.slice.
Jan 14 23:45:13.711897 kubelet[2531]: E0114 23:45:13.711846 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.715675 systemd[1]: Created slice kubepods-burstable-pod2600c830ca674ed87b79b96ba000ed32.slice - libcontainer container kubepods-burstable-pod2600c830ca674ed87b79b96ba000ed32.slice.
Jan 14 23:45:13.718225 kubelet[2531]: E0114 23:45:13.717629 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.720138 systemd[1]: Created slice kubepods-burstable-poda0fb221571f90a6b03ac373000837dfe.slice - libcontainer container kubepods-burstable-poda0fb221571f90a6b03ac373000837dfe.slice.
Jan 14 23:45:13.721648 kubelet[2531]: E0114 23:45:13.721628 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.723933 kubelet[2531]: I0114 23:45:13.723912 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.724385 kubelet[2531]: E0114 23:45:13.724361 2531 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.22.230:6443/api/v1/nodes\": dial tcp 10.0.22.230:6443: connect: connection refused" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.760174 kubelet[2531]: E0114 23:45:13.760102 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="400ms"
Jan 14 23:45:13.860514 kubelet[2531]: I0114 23:45:13.860442 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.860699 kubelet[2531]: I0114 23:45:13.860499 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.860774 kubelet[2531]: I0114 23:45:13.860760 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.860851 kubelet[2531]: I0114 23:45:13.860839 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0fb221571f90a6b03ac373000837dfe-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-1d3be4f164\" (UID: \"a0fb221571f90a6b03ac373000837dfe\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.861007 kubelet[2531]: I0114 23:45:13.860930 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.861007 kubelet[2531]: I0114 23:45:13.860949 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.861140 kubelet[2531]: I0114 23:45:13.860965 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.861140 kubelet[2531]: I0114 23:45:13.861103 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.861140 kubelet[2531]: I0114 23:45:13.861123 2531 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.926146 kubelet[2531]: I0114 23:45:13.926123 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:13.926502 kubelet[2531]: E0114 23:45:13.926470 2531 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.22.230:6443/api/v1/nodes\": dial tcp 10.0.22.230:6443: connect: connection refused" node="ci-4515-1-0-n-1d3be4f164"
Jan 14 23:45:14.012902 containerd[1695]: time="2026-01-14T23:45:14.012772288Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-1d3be4f164,Uid:0b87770b8d26d1b1663c3229f1382cec,Namespace:kube-system,Attempt:0,}"
Jan 14 23:45:14.018438 containerd[1695]: time="2026-01-14T23:45:14.018303305Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-1d3be4f164,Uid:2600c830ca674ed87b79b96ba000ed32,Namespace:kube-system,Attempt:0,}"
Jan 14 23:45:14.023042 containerd[1695]: time="2026-01-14T23:45:14.022957079Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-1d3be4f164,Uid:a0fb221571f90a6b03ac373000837dfe,Namespace:kube-system,Attempt:0,}"
Jan 14 23:45:14.043339 containerd[1695]: time="2026-01-14T23:45:14.043296101Z" level=info msg="connecting to shim 73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" namespace=k8s.io protocol=ttrpc version=3
Jan 14 23:45:14.057760 containerd[1695]: time="2026-01-14T23:45:14.057709185Z" level=info msg="connecting to shim eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95" address="unix:///run/containerd/s/4e8ea3ad27e8d5810075e10b08c0e8d908f7a88ab430f2b490a585ea504e0a17" namespace=k8s.io protocol=ttrpc version=3
Jan 14 23:45:14.066310 containerd[1695]: time="2026-01-14T23:45:14.066257971Z" level=info msg="connecting to shim e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0" address="unix:///run/containerd/s/2ed5a4b04d5ba33a844034ea44ea6dc6a2bb7bc80a666fe573555a5ed2a8fae8" namespace=k8s.io protocol=ttrpc version=3
Jan 14 23:45:14.067492 systemd[1]: Started cri-containerd-73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5.scope - libcontainer container 73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5.
Jan 14 23:45:14.088736 systemd[1]: Started cri-containerd-eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95.scope - libcontainer container eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95.
Jan 14 23:45:14.090000 audit: BPF prog-id=83 op=LOAD
Jan 14 23:45:14.091000 audit: BPF prog-id=84 op=LOAD
Jan 14 23:45:14.091000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.091000 audit: BPF prog-id=84 op=UNLOAD
Jan 14 23:45:14.091000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.091000 audit: BPF prog-id=85 op=LOAD
Jan 14 23:45:14.091000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.091000 audit: BPF prog-id=86 op=LOAD
Jan 14 23:45:14.091000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.091000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.092000 audit: BPF prog-id=86 op=UNLOAD
Jan 14 23:45:14.092000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.092000 audit: BPF prog-id=85 op=UNLOAD
Jan 14 23:45:14.092000 audit[2584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.092000 audit: BPF prog-id=87 op=LOAD
Jan 14 23:45:14.092000 audit[2584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2573 pid=2584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 14 23:45:14.092000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3733393535653830386262353831653462646337616132643539353963
Jan 14 23:45:14.093206 systemd[1]: Started cri-containerd-e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0.scope - libcontainer container e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0.
Jan 14 23:45:14.100000 audit: BPF prog-id=88 op=LOAD Jan 14 23:45:14.100000 audit: BPF prog-id=89 op=LOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=89 op=UNLOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=90 op=LOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=91 op=LOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=91 op=UNLOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=90 op=UNLOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.100000 audit: BPF prog-id=92 op=LOAD Jan 14 23:45:14.100000 audit[2623]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2605 pid=2623 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.100000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562613064623633666662636262623630366662656166353263346666 Jan 14 23:45:14.108000 audit: BPF prog-id=93 op=LOAD Jan 14 23:45:14.109000 audit: BPF prog-id=94 op=LOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=94 op=UNLOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=95 op=LOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=96 op=LOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=96 op=UNLOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=95 op=UNLOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.109000 audit: BPF prog-id=97 op=LOAD Jan 14 23:45:14.109000 audit[2640]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2621 pid=2640 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6539663466653166383739396564653633663063323438326165343032 Jan 14 23:45:14.123509 containerd[1695]: time="2026-01-14T23:45:14.123454906Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-apiserver-ci-4515-1-0-n-1d3be4f164,Uid:0b87770b8d26d1b1663c3229f1382cec,Namespace:kube-system,Attempt:0,} returns sandbox id \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\"" Jan 14 23:45:14.129329 containerd[1695]: time="2026-01-14T23:45:14.129132963Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 14 23:45:14.131936 containerd[1695]: time="2026-01-14T23:45:14.131890092Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4515-1-0-n-1d3be4f164,Uid:2600c830ca674ed87b79b96ba000ed32,Namespace:kube-system,Attempt:0,} returns sandbox id \"eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95\"" Jan 14 23:45:14.134577 containerd[1695]: time="2026-01-14T23:45:14.134280139Z" level=info msg="CreateContainer within sandbox \"eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 14 23:45:14.140091 containerd[1695]: time="2026-01-14T23:45:14.139891436Z" level=info msg="Container db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:14.144035 containerd[1695]: time="2026-01-14T23:45:14.144004849Z" level=info msg="Container 8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:14.144650 containerd[1695]: time="2026-01-14T23:45:14.144605651Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4515-1-0-n-1d3be4f164,Uid:a0fb221571f90a6b03ac373000837dfe,Namespace:kube-system,Attempt:0,} returns sandbox id \"e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0\"" Jan 14 23:45:14.147872 containerd[1695]: time="2026-01-14T23:45:14.147839141Z" level=info msg="CreateContainer within sandbox 
\"e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 14 23:45:14.149132 containerd[1695]: time="2026-01-14T23:45:14.149099984Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\"" Jan 14 23:45:14.149807 containerd[1695]: time="2026-01-14T23:45:14.149781947Z" level=info msg="StartContainer for \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\"" Jan 14 23:45:14.151281 containerd[1695]: time="2026-01-14T23:45:14.151004390Z" level=info msg="connecting to shim db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:45:14.152701 containerd[1695]: time="2026-01-14T23:45:14.152667715Z" level=info msg="CreateContainer within sandbox \"eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206\"" Jan 14 23:45:14.153305 containerd[1695]: time="2026-01-14T23:45:14.153279597Z" level=info msg="StartContainer for \"8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206\"" Jan 14 23:45:14.154929 containerd[1695]: time="2026-01-14T23:45:14.154894202Z" level=info msg="connecting to shim 8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206" address="unix:///run/containerd/s/4e8ea3ad27e8d5810075e10b08c0e8d908f7a88ab430f2b490a585ea504e0a17" protocol=ttrpc version=3 Jan 14 23:45:14.161870 kubelet[2531]: E0114 23:45:14.161284 2531 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="800ms" Jan 14 23:45:14.164100 containerd[1695]: time="2026-01-14T23:45:14.163181507Z" level=info msg="Container dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:14.175488 systemd[1]: Started cri-containerd-8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206.scope - libcontainer container 8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206. Jan 14 23:45:14.176037 containerd[1695]: time="2026-01-14T23:45:14.175990666Z" level=info msg="CreateContainer within sandbox \"e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92\"" Jan 14 23:45:14.176445 systemd[1]: Started cri-containerd-db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a.scope - libcontainer container db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a. 
Jan 14 23:45:14.176788 containerd[1695]: time="2026-01-14T23:45:14.176760349Z" level=info msg="StartContainer for \"dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92\"" Jan 14 23:45:14.178648 containerd[1695]: time="2026-01-14T23:45:14.178613434Z" level=info msg="connecting to shim dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92" address="unix:///run/containerd/s/2ed5a4b04d5ba33a844034ea44ea6dc6a2bb7bc80a666fe573555a5ed2a8fae8" protocol=ttrpc version=3 Jan 14 23:45:14.189000 audit: BPF prog-id=98 op=LOAD Jan 14 23:45:14.190000 audit: BPF prog-id=99 op=LOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=99 op=UNLOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=100 op=LOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=101 op=LOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=101 op=UNLOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=100 
op=UNLOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.190000 audit: BPF prog-id=102 op=LOAD Jan 14 23:45:14.190000 audit[2703]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2573 pid=2703 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.190000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6462356332313966396132633064336539363436653038396431653932 Jan 14 23:45:14.192000 audit: BPF prog-id=103 op=LOAD Jan 14 23:45:14.193000 audit: BPF prog-id=104 op=LOAD Jan 14 23:45:14.193000 audit[2711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.193000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.193000 audit: BPF prog-id=104 op=UNLOAD Jan 14 23:45:14.193000 audit[2711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.193000 audit: BPF prog-id=105 op=LOAD Jan 14 23:45:14.193000 audit[2711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.193000 audit: BPF prog-id=106 op=LOAD Jan 14 23:45:14.193000 audit[2711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:45:14.193000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.194000 audit: BPF prog-id=106 op=UNLOAD Jan 14 23:45:14.194000 audit[2711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.194000 audit: BPF prog-id=105 op=UNLOAD Jan 14 23:45:14.194000 audit[2711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.194000 audit: BPF prog-id=107 op=LOAD Jan 14 23:45:14.194000 audit[2711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2605 pid=2711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.194000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862656133666131373633373063613637393636343934323330643435 Jan 14 23:45:14.203803 systemd[1]: Started cri-containerd-dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92.scope - libcontainer container dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92. Jan 14 23:45:14.219000 audit: BPF prog-id=108 op=LOAD Jan 14 23:45:14.220000 audit: BPF prog-id=109 op=LOAD Jan 14 23:45:14.220000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.220000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.221000 audit: BPF prog-id=109 op=UNLOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 
23:45:14.221000 audit: BPF prog-id=110 op=LOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.221000 audit: BPF prog-id=111 op=LOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.221000 audit: BPF prog-id=111 op=UNLOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.221000 audit: BPF prog-id=110 op=UNLOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.221000 audit: BPF prog-id=112 op=LOAD Jan 14 23:45:14.221000 audit[2738]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2621 pid=2738 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:14.221000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6466663037323934303264653162663530393366636137363962306132 Jan 14 23:45:14.233976 containerd[1695]: time="2026-01-14T23:45:14.233877962Z" level=info msg="StartContainer for \"8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206\" returns successfully" Jan 14 23:45:14.237493 containerd[1695]: time="2026-01-14T23:45:14.237452453Z" level=info msg="StartContainer for 
\"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" returns successfully" Jan 14 23:45:14.257644 containerd[1695]: time="2026-01-14T23:45:14.257597994Z" level=info msg="StartContainer for \"dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92\" returns successfully" Jan 14 23:45:14.327986 kubelet[2531]: I0114 23:45:14.327903 2531 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:14.586823 kubelet[2531]: E0114 23:45:14.586734 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:14.588228 kubelet[2531]: E0114 23:45:14.588204 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:14.589881 kubelet[2531]: E0114 23:45:14.589861 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:15.594678 kubelet[2531]: E0114 23:45:15.594647 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:15.595032 kubelet[2531]: E0114 23:45:15.594873 2531 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:15.886883 kubelet[2531]: E0114 23:45:15.886750 2531 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4515-1-0-n-1d3be4f164\" not found" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.056535 kubelet[2531]: I0114 23:45:16.056483 2531 kubelet_node_status.go:78] 
"Successfully registered node" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.056535 kubelet[2531]: E0114 23:45:16.056527 2531 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": node \"ci-4515-1-0-n-1d3be4f164\" not found" Jan 14 23:45:16.060034 kubelet[2531]: I0114 23:45:16.058891 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.129178 kubelet[2531]: E0114 23:45:16.129010 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4515-1-0-n-1d3be4f164\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.129178 kubelet[2531]: I0114 23:45:16.129046 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.131432 kubelet[2531]: E0114 23:45:16.131363 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.131432 kubelet[2531]: I0114 23:45:16.131392 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.133571 kubelet[2531]: E0114 23:45:16.133492 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.554413 kubelet[2531]: I0114 23:45:16.554324 2531 apiserver.go:52] "Watching apiserver" Jan 14 23:45:16.559239 kubelet[2531]: I0114 23:45:16.559177 2531 desired_state_of_world_populator.go:158] "Finished populating initial desired 
state of world" Jan 14 23:45:16.592932 kubelet[2531]: I0114 23:45:16.592892 2531 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:16.596493 kubelet[2531]: E0114 23:45:16.596450 2531 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.099379 systemd[1]: Reload requested from client PID 2807 ('systemctl') (unit session-9.scope)... Jan 14 23:45:18.099397 systemd[1]: Reloading... Jan 14 23:45:18.165363 zram_generator::config[2853]: No configuration found. Jan 14 23:45:18.346559 systemd[1]: Reloading finished in 246 ms. Jan 14 23:45:18.367711 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 14 23:45:18.380746 systemd[1]: kubelet.service: Deactivated successfully. Jan 14 23:45:18.381049 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:45:18.384410 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 14 23:45:18.384464 kernel: audit: type=1131 audit(1768434318.380:396): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:18.380000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:18.381126 systemd[1]: kubelet.service: Consumed 1.767s CPU time, 128M memory peak. Jan 14 23:45:18.382959 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 14 23:45:18.384000 audit: BPF prog-id=113 op=LOAD Jan 14 23:45:18.385623 kernel: audit: type=1334 audit(1768434318.384:397): prog-id=113 op=LOAD Jan 14 23:45:18.384000 audit: BPF prog-id=67 op=UNLOAD Jan 14 23:45:18.386613 kernel: audit: type=1334 audit(1768434318.384:398): prog-id=67 op=UNLOAD Jan 14 23:45:18.384000 audit: BPF prog-id=114 op=LOAD Jan 14 23:45:18.387494 kernel: audit: type=1334 audit(1768434318.384:399): prog-id=114 op=LOAD Jan 14 23:45:18.384000 audit: BPF prog-id=115 op=LOAD Jan 14 23:45:18.388381 kernel: audit: type=1334 audit(1768434318.384:400): prog-id=115 op=LOAD Jan 14 23:45:18.384000 audit: BPF prog-id=68 op=UNLOAD Jan 14 23:45:18.389427 kernel: audit: type=1334 audit(1768434318.384:401): prog-id=68 op=UNLOAD Jan 14 23:45:18.384000 audit: BPF prog-id=69 op=UNLOAD Jan 14 23:45:18.390349 kernel: audit: type=1334 audit(1768434318.384:402): prog-id=69 op=UNLOAD Jan 14 23:45:18.385000 audit: BPF prog-id=116 op=LOAD Jan 14 23:45:18.391287 kernel: audit: type=1334 audit(1768434318.385:403): prog-id=116 op=LOAD Jan 14 23:45:18.385000 audit: BPF prog-id=64 op=UNLOAD Jan 14 23:45:18.386000 audit: BPF prog-id=117 op=LOAD Jan 14 23:45:18.392306 kernel: audit: type=1334 audit(1768434318.385:404): prog-id=64 op=UNLOAD Jan 14 23:45:18.386000 audit: BPF prog-id=118 op=LOAD Jan 14 23:45:18.386000 audit: BPF prog-id=65 op=UNLOAD Jan 14 23:45:18.386000 audit: BPF prog-id=66 op=UNLOAD Jan 14 23:45:18.388000 audit: BPF prog-id=119 op=LOAD Jan 14 23:45:18.388000 audit: BPF prog-id=78 op=UNLOAD Jan 14 23:45:18.394295 kernel: audit: type=1334 audit(1768434318.386:405): prog-id=117 op=LOAD Jan 14 23:45:18.395000 audit: BPF prog-id=120 op=LOAD Jan 14 23:45:18.395000 audit: BPF prog-id=80 op=UNLOAD Jan 14 23:45:18.395000 audit: BPF prog-id=121 op=LOAD Jan 14 23:45:18.395000 audit: BPF prog-id=122 op=LOAD Jan 14 23:45:18.395000 audit: BPF prog-id=81 op=UNLOAD Jan 14 23:45:18.395000 audit: BPF prog-id=82 op=UNLOAD Jan 14 23:45:18.396000 audit: BPF prog-id=123 
op=LOAD Jan 14 23:45:18.396000 audit: BPF prog-id=63 op=UNLOAD Jan 14 23:45:18.397000 audit: BPF prog-id=124 op=LOAD Jan 14 23:45:18.397000 audit: BPF prog-id=70 op=UNLOAD Jan 14 23:45:18.397000 audit: BPF prog-id=125 op=LOAD Jan 14 23:45:18.397000 audit: BPF prog-id=126 op=LOAD Jan 14 23:45:18.397000 audit: BPF prog-id=71 op=UNLOAD Jan 14 23:45:18.397000 audit: BPF prog-id=72 op=UNLOAD Jan 14 23:45:18.398000 audit: BPF prog-id=127 op=LOAD Jan 14 23:45:18.398000 audit: BPF prog-id=75 op=UNLOAD Jan 14 23:45:18.398000 audit: BPF prog-id=128 op=LOAD Jan 14 23:45:18.398000 audit: BPF prog-id=129 op=LOAD Jan 14 23:45:18.398000 audit: BPF prog-id=76 op=UNLOAD Jan 14 23:45:18.398000 audit: BPF prog-id=77 op=UNLOAD Jan 14 23:45:18.399000 audit: BPF prog-id=130 op=LOAD Jan 14 23:45:18.399000 audit: BPF prog-id=131 op=LOAD Jan 14 23:45:18.399000 audit: BPF prog-id=73 op=UNLOAD Jan 14 23:45:18.399000 audit: BPF prog-id=74 op=UNLOAD Jan 14 23:45:18.400000 audit: BPF prog-id=132 op=LOAD Jan 14 23:45:18.400000 audit: BPF prog-id=79 op=UNLOAD Jan 14 23:45:18.538832 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 14 23:45:18.538000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:18.547552 (kubelet)[2898]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 14 23:45:18.582830 kubelet[2898]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:45:18.582830 kubelet[2898]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. 
Jan 14 23:45:18.582830 kubelet[2898]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 14 23:45:18.583146 kubelet[2898]: I0114 23:45:18.582879 2898 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 14 23:45:18.589917 kubelet[2898]: I0114 23:45:18.589849 2898 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 14 23:45:18.589917 kubelet[2898]: I0114 23:45:18.589874 2898 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 14 23:45:18.590486 kubelet[2898]: I0114 23:45:18.590464 2898 server.go:954] "Client rotation is on, will bootstrap in background" Jan 14 23:45:18.591737 kubelet[2898]: I0114 23:45:18.591718 2898 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 14 23:45:18.593979 kubelet[2898]: I0114 23:45:18.593952 2898 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 14 23:45:18.597065 kubelet[2898]: I0114 23:45:18.597044 2898 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 14 23:45:18.599662 kubelet[2898]: I0114 23:45:18.599645 2898 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 14 23:45:18.599829 kubelet[2898]: I0114 23:45:18.599808 2898 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 14 23:45:18.602149 kubelet[2898]: I0114 23:45:18.599830 2898 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4515-1-0-n-1d3be4f164","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 14 23:45:18.602149 kubelet[2898]: I0114 23:45:18.602126 2898 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 14 23:45:18.602318 kubelet[2898]: I0114 23:45:18.602178 2898 container_manager_linux.go:304] "Creating device plugin manager" Jan 14 23:45:18.602318 kubelet[2898]: I0114 23:45:18.602247 2898 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:45:18.602727 kubelet[2898]: I0114 23:45:18.602580 2898 kubelet.go:446] "Attempting to sync node with API server" Jan 14 23:45:18.602727 kubelet[2898]: I0114 23:45:18.602609 2898 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 14 23:45:18.602727 kubelet[2898]: I0114 23:45:18.602631 2898 kubelet.go:352] "Adding apiserver pod source" Jan 14 23:45:18.602727 kubelet[2898]: I0114 23:45:18.602641 2898 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 14 23:45:18.604757 kubelet[2898]: I0114 23:45:18.604733 2898 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 14 23:45:18.605218 kubelet[2898]: I0114 23:45:18.605194 2898 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 14 23:45:18.605771 kubelet[2898]: I0114 23:45:18.605693 2898 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 14 23:45:18.605771 kubelet[2898]: I0114 23:45:18.605735 2898 server.go:1287] "Started kubelet" Jan 14 23:45:18.606126 kubelet[2898]: I0114 23:45:18.606072 2898 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 14 23:45:18.606345 kubelet[2898]: I0114 23:45:18.606307 2898 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 14 23:45:18.606709 kubelet[2898]: I0114 23:45:18.605861 2898 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 14 23:45:18.607804 kubelet[2898]: I0114 23:45:18.607785 2898 server.go:479] "Adding debug handlers to kubelet server" Jan 14 23:45:18.608030 kubelet[2898]: I0114 
23:45:18.607978 2898 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 14 23:45:18.608210 kubelet[2898]: I0114 23:45:18.608192 2898 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 14 23:45:18.609241 kubelet[2898]: I0114 23:45:18.608308 2898 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 14 23:45:18.609241 kubelet[2898]: I0114 23:45:18.608404 2898 reconciler.go:26] "Reconciler: start to sync state" Jan 14 23:45:18.609241 kubelet[2898]: I0114 23:45:18.609091 2898 factory.go:221] Registration of the systemd container factory successfully Jan 14 23:45:18.609241 kubelet[2898]: I0114 23:45:18.609172 2898 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 14 23:45:18.611642 kubelet[2898]: I0114 23:45:18.611598 2898 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 14 23:45:18.612259 kubelet[2898]: E0114 23:45:18.612233 2898 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4515-1-0-n-1d3be4f164\" not found" Jan 14 23:45:18.615044 kubelet[2898]: I0114 23:45:18.615005 2898 factory.go:221] Registration of the containerd container factory successfully Jan 14 23:45:18.622855 kubelet[2898]: E0114 23:45:18.619354 2898 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 14 23:45:18.629847 kubelet[2898]: I0114 23:45:18.629797 2898 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 14 23:45:18.630647 kubelet[2898]: I0114 23:45:18.630622 2898 kubelet_network_linux.go:50] "Initialized iptables rules." 
protocol="IPv6" Jan 14 23:45:18.630647 kubelet[2898]: I0114 23:45:18.630642 2898 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 14 23:45:18.630735 kubelet[2898]: I0114 23:45:18.630662 2898 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 14 23:45:18.630735 kubelet[2898]: I0114 23:45:18.630671 2898 kubelet.go:2382] "Starting kubelet main sync loop" Jan 14 23:45:18.630735 kubelet[2898]: E0114 23:45:18.630710 2898 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 14 23:45:18.662549 kubelet[2898]: I0114 23:45:18.662520 2898 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 14 23:45:18.662549 kubelet[2898]: I0114 23:45:18.662541 2898 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 14 23:45:18.662549 kubelet[2898]: I0114 23:45:18.662563 2898 state_mem.go:36] "Initialized new in-memory state store" Jan 14 23:45:18.662746 kubelet[2898]: I0114 23:45:18.662728 2898 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 14 23:45:18.662774 kubelet[2898]: I0114 23:45:18.662744 2898 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 14 23:45:18.662774 kubelet[2898]: I0114 23:45:18.662762 2898 policy_none.go:49] "None policy: Start" Jan 14 23:45:18.662774 kubelet[2898]: I0114 23:45:18.662769 2898 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 14 23:45:18.662833 kubelet[2898]: I0114 23:45:18.662778 2898 state_mem.go:35] "Initializing new in-memory state store" Jan 14 23:45:18.662880 kubelet[2898]: I0114 23:45:18.662869 2898 state_mem.go:75] "Updated machine memory state" Jan 14 23:45:18.666566 kubelet[2898]: I0114 23:45:18.666524 2898 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 14 23:45:18.666701 kubelet[2898]: I0114 
23:45:18.666674 2898 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 14 23:45:18.666738 kubelet[2898]: I0114 23:45:18.666693 2898 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 14 23:45:18.667444 kubelet[2898]: I0114 23:45:18.667209 2898 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 14 23:45:18.668867 kubelet[2898]: E0114 23:45:18.668836 2898 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 14 23:45:18.731803 kubelet[2898]: I0114 23:45:18.731772 2898 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.732032 kubelet[2898]: I0114 23:45:18.731772 2898 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.732104 kubelet[2898]: I0114 23:45:18.731890 2898 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.769596 kubelet[2898]: I0114 23:45:18.769569 2898 kubelet_node_status.go:75] "Attempting to register node" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.779666 kubelet[2898]: I0114 23:45:18.779527 2898 kubelet_node_status.go:124] "Node was previously registered" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.779666 kubelet[2898]: I0114 23:45:18.779614 2898 kubelet_node_status.go:78] "Successfully registered node" node="ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.909933 kubelet[2898]: I0114 23:45:18.909802 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-ca-certs\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " 
pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.909933 kubelet[2898]: I0114 23:45:18.909839 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-k8s-certs\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.909933 kubelet[2898]: I0114 23:45:18.909869 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/0b87770b8d26d1b1663c3229f1382cec-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" (UID: \"0b87770b8d26d1b1663c3229f1382cec\") " pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.909933 kubelet[2898]: I0114 23:45:18.909888 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-ca-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.909933 kubelet[2898]: I0114 23:45:18.909905 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-flexvolume-dir\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.910528 kubelet[2898]: I0114 23:45:18.910494 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: 
\"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-k8s-certs\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.910561 kubelet[2898]: I0114 23:45:18.910531 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-kubeconfig\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.910639 kubelet[2898]: I0114 23:45:18.910548 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/a0fb221571f90a6b03ac373000837dfe-kubeconfig\") pod \"kube-scheduler-ci-4515-1-0-n-1d3be4f164\" (UID: \"a0fb221571f90a6b03ac373000837dfe\") " pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:18.910639 kubelet[2898]: I0114 23:45:18.910579 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2600c830ca674ed87b79b96ba000ed32-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4515-1-0-n-1d3be4f164\" (UID: \"2600c830ca674ed87b79b96ba000ed32\") " pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:19.603836 kubelet[2898]: I0114 23:45:19.603776 2898 apiserver.go:52] "Watching apiserver" Jan 14 23:45:19.608510 kubelet[2898]: I0114 23:45:19.608482 2898 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 14 23:45:19.645807 update_engine[1674]: I20260114 23:45:19.645377 1674 update_attempter.cc:509] Updating boot flags... 
Jan 14 23:45:19.651037 kubelet[2898]: I0114 23:45:19.650936 2898 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:19.659114 kubelet[2898]: E0114 23:45:19.659074 2898 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4515-1-0-n-1d3be4f164\" already exists" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" Jan 14 23:45:19.673342 kubelet[2898]: I0114 23:45:19.672277 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" podStartSLOduration=1.672254074 podStartE2EDuration="1.672254074s" podCreationTimestamp="2026-01-14 23:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:45:19.671868113 +0000 UTC m=+1.121406667" watchObservedRunningTime="2026-01-14 23:45:19.672254074 +0000 UTC m=+1.121792628" Jan 14 23:45:19.696936 kubelet[2898]: I0114 23:45:19.695548 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podStartSLOduration=1.6955312249999999 podStartE2EDuration="1.695531225s" podCreationTimestamp="2026-01-14 23:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:45:19.685532794 +0000 UTC m=+1.135071348" watchObservedRunningTime="2026-01-14 23:45:19.695531225 +0000 UTC m=+1.145069779" Jan 14 23:45:19.696936 kubelet[2898]: I0114 23:45:19.695687 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" podStartSLOduration=1.695680425 podStartE2EDuration="1.695680425s" podCreationTimestamp="2026-01-14 23:45:18 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" 
observedRunningTime="2026-01-14 23:45:19.695325144 +0000 UTC m=+1.144863698" watchObservedRunningTime="2026-01-14 23:45:19.695680425 +0000 UTC m=+1.145218939" Jan 14 23:45:23.622743 kubelet[2898]: I0114 23:45:23.622712 2898 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 14 23:45:23.623080 containerd[1695]: time="2026-01-14T23:45:23.623040143Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 14 23:45:23.623325 kubelet[2898]: I0114 23:45:23.623305 2898 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 14 23:45:24.521537 systemd[1]: Created slice kubepods-besteffort-pod237b3c88_c19e_47d2_b4c1_e6a5d4a3526d.slice - libcontainer container kubepods-besteffort-pod237b3c88_c19e_47d2_b4c1_e6a5d4a3526d.slice. Jan 14 23:45:24.545965 kubelet[2898]: I0114 23:45:24.545920 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/237b3c88-c19e-47d2-b4c1-e6a5d4a3526d-xtables-lock\") pod \"kube-proxy-7hg9f\" (UID: \"237b3c88-c19e-47d2-b4c1-e6a5d4a3526d\") " pod="kube-system/kube-proxy-7hg9f" Jan 14 23:45:24.546098 kubelet[2898]: I0114 23:45:24.545976 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/237b3c88-c19e-47d2-b4c1-e6a5d4a3526d-lib-modules\") pod \"kube-proxy-7hg9f\" (UID: \"237b3c88-c19e-47d2-b4c1-e6a5d4a3526d\") " pod="kube-system/kube-proxy-7hg9f" Jan 14 23:45:24.546098 kubelet[2898]: I0114 23:45:24.545995 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/237b3c88-c19e-47d2-b4c1-e6a5d4a3526d-kube-proxy\") pod \"kube-proxy-7hg9f\" (UID: \"237b3c88-c19e-47d2-b4c1-e6a5d4a3526d\") " 
pod="kube-system/kube-proxy-7hg9f" Jan 14 23:45:24.546098 kubelet[2898]: I0114 23:45:24.546014 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zttmr\" (UniqueName: \"kubernetes.io/projected/237b3c88-c19e-47d2-b4c1-e6a5d4a3526d-kube-api-access-zttmr\") pod \"kube-proxy-7hg9f\" (UID: \"237b3c88-c19e-47d2-b4c1-e6a5d4a3526d\") " pod="kube-system/kube-proxy-7hg9f" Jan 14 23:45:24.696732 systemd[1]: Created slice kubepods-besteffort-pod549af1a4_d10d_41a8_bd81_9ce05836d164.slice - libcontainer container kubepods-besteffort-pod549af1a4_d10d_41a8_bd81_9ce05836d164.slice. Jan 14 23:45:24.747346 kubelet[2898]: I0114 23:45:24.747227 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/549af1a4-d10d-41a8-bd81-9ce05836d164-var-lib-calico\") pod \"tigera-operator-7dcd859c48-hg526\" (UID: \"549af1a4-d10d-41a8-bd81-9ce05836d164\") " pod="tigera-operator/tigera-operator-7dcd859c48-hg526" Jan 14 23:45:24.747346 kubelet[2898]: I0114 23:45:24.747343 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2ftvk\" (UniqueName: \"kubernetes.io/projected/549af1a4-d10d-41a8-bd81-9ce05836d164-kube-api-access-2ftvk\") pod \"tigera-operator-7dcd859c48-hg526\" (UID: \"549af1a4-d10d-41a8-bd81-9ce05836d164\") " pod="tigera-operator/tigera-operator-7dcd859c48-hg526" Jan 14 23:45:24.833035 containerd[1695]: time="2026-01-14T23:45:24.832936919Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7hg9f,Uid:237b3c88-c19e-47d2-b4c1-e6a5d4a3526d,Namespace:kube-system,Attempt:0,}" Jan 14 23:45:24.854833 containerd[1695]: time="2026-01-14T23:45:24.854788586Z" level=info msg="connecting to shim f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760" 
address="unix:///run/containerd/s/e3cca59407d273440efbae0a22f08e1fceb1096831f8969065a53b6593c560ac" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:45:24.879485 systemd[1]: Started cri-containerd-f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760.scope - libcontainer container f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760. Jan 14 23:45:24.886000 audit: BPF prog-id=133 op=LOAD Jan 14 23:45:24.888786 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 14 23:45:24.888854 kernel: audit: type=1334 audit(1768434324.886:438): prog-id=133 op=LOAD Jan 14 23:45:24.889000 audit: BPF prog-id=134 op=LOAD Jan 14 23:45:24.890414 kernel: audit: type=1334 audit(1768434324.889:439): prog-id=134 op=LOAD Jan 14 23:45:24.890522 kernel: audit: type=1300 audit(1768434324.889:439): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.889000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.896603 kernel: audit: type=1327 audit(1768434324.889:439): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.896679 kernel: audit: type=1334 audit(1768434324.889:440): prog-id=134 op=UNLOAD Jan 14 23:45:24.889000 audit: BPF prog-id=134 op=UNLOAD Jan 14 23:45:24.889000 audit[2984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.900323 kernel: audit: type=1300 audit(1768434324.889:440): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.900380 kernel: audit: type=1327 audit(1768434324.889:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.889000 audit: BPF prog-id=135 op=LOAD Jan 14 23:45:24.889000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.907183 kernel: audit: type=1334 audit(1768434324.889:441): prog-id=135 op=LOAD Jan 14 23:45:24.907243 kernel: audit: type=1300 audit(1768434324.889:441): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.889000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.910165 kernel: audit: type=1327 audit(1768434324.889:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.890000 audit: BPF prog-id=136 op=LOAD Jan 14 23:45:24.890000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.890000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.893000 audit: BPF prog-id=136 op=UNLOAD Jan 14 23:45:24.893000 audit[2984]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.893000 audit: BPF prog-id=135 op=UNLOAD Jan 14 23:45:24.893000 audit[2984]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.893000 audit: BPF prog-id=137 op=LOAD Jan 14 23:45:24.893000 audit[2984]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2971 pid=2984 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:24.893000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6630383666653532656561633536636263346633623730626536656338 Jan 14 23:45:24.920298 containerd[1695]: time="2026-01-14T23:45:24.920245585Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-7hg9f,Uid:237b3c88-c19e-47d2-b4c1-e6a5d4a3526d,Namespace:kube-system,Attempt:0,} returns sandbox id \"f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760\"" Jan 14 23:45:24.922798 containerd[1695]: time="2026-01-14T23:45:24.922762033Z" level=info msg="CreateContainer within sandbox \"f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 14 23:45:24.934018 containerd[1695]: time="2026-01-14T23:45:24.933973347Z" level=info msg="Container 8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:24.942256 containerd[1695]: time="2026-01-14T23:45:24.942216453Z" level=info msg="CreateContainer within sandbox \"f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49\"" Jan 14 23:45:24.943067 containerd[1695]: time="2026-01-14T23:45:24.943031255Z" level=info msg="StartContainer for \"8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49\"" Jan 14 23:45:24.944764 containerd[1695]: time="2026-01-14T23:45:24.944725820Z" level=info msg="connecting to shim 8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49" address="unix:///run/containerd/s/e3cca59407d273440efbae0a22f08e1fceb1096831f8969065a53b6593c560ac" protocol=ttrpc version=3 Jan 14 23:45:24.964521 systemd[1]: Started cri-containerd-8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49.scope - libcontainer container 8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49. 
Jan 14 23:45:25.001555 containerd[1695]: time="2026-01-14T23:45:25.001514074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hg526,Uid:549af1a4-d10d-41a8-bd81-9ce05836d164,Namespace:tigera-operator,Attempt:0,}" Jan 14 23:45:25.015000 audit: BPF prog-id=138 op=LOAD Jan 14 23:45:25.015000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2971 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383133303164323232653939333336333364653830373030643134 Jan 14 23:45:25.015000 audit: BPF prog-id=139 op=LOAD Jan 14 23:45:25.015000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2971 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383133303164323232653939333336333364653830373030643134 Jan 14 23:45:25.015000 audit: BPF prog-id=139 op=UNLOAD Jan 14 23:45:25.015000 audit[3010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:45:25.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383133303164323232653939333336333364653830373030643134 Jan 14 23:45:25.015000 audit: BPF prog-id=138 op=UNLOAD Jan 14 23:45:25.015000 audit[3010]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2971 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383133303164323232653939333336333364653830373030643134 Jan 14 23:45:25.015000 audit: BPF prog-id=140 op=LOAD Jan 14 23:45:25.015000 audit[3010]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2971 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.015000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838383133303164323232653939333336333364653830373030643134 Jan 14 23:45:25.023098 containerd[1695]: time="2026-01-14T23:45:25.022889779Z" level=info msg="connecting to shim eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" namespace=k8s.io protocol=ttrpc version=3 Jan 14 
23:45:25.035856 containerd[1695]: time="2026-01-14T23:45:25.035788818Z" level=info msg="StartContainer for \"8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49\" returns successfully" Jan 14 23:45:25.049490 systemd[1]: Started cri-containerd-eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7.scope - libcontainer container eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7. Jan 14 23:45:25.060000 audit: BPF prog-id=141 op=LOAD Jan 14 23:45:25.061000 audit: BPF prog-id=142 op=LOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=142 op=UNLOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=143 op=LOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 
a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=144 op=LOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=144 op=UNLOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=143 op=UNLOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.061000 audit: BPF prog-id=145 op=LOAD Jan 14 23:45:25.061000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3039 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.061000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562313837333666376365393533646430623839626333663433373036 Jan 14 23:45:25.087861 containerd[1695]: time="2026-01-14T23:45:25.087715697Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-hg526,Uid:549af1a4-d10d-41a8-bd81-9ce05836d164,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\"" Jan 14 23:45:25.090259 containerd[1695]: time="2026-01-14T23:45:25.089506303Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 14 23:45:25.192000 audit[3122]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.192000 audit[3122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe73fb150 a2=0 
a3=1 items=0 ppid=3023 pid=3122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:45:25.192000 audit[3123]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3123 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.192000 audit[3123]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec4046b0 a2=0 a3=1 items=0 ppid=3023 pid=3123 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.192000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 14 23:45:25.196000 audit[3125]: NETFILTER_CFG table=nat:56 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.196000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc7962340 a2=0 a3=1 items=0 ppid=3023 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.196000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:45:25.197000 audit[3124]: NETFILTER_CFG table=nat:57 family=2 entries=1 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.197000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9d1cd20 
a2=0 a3=1 items=0 ppid=3023 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.197000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 14 23:45:25.197000 audit[3127]: NETFILTER_CFG table=filter:58 family=10 entries=1 op=nft_register_chain pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.197000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc3164f0 a2=0 a3=1 items=0 ppid=3023 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.197000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:45:25.198000 audit[3128]: NETFILTER_CFG table=filter:59 family=2 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.198000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe9ba94a0 a2=0 a3=1 items=0 ppid=3023 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.198000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 14 23:45:25.297000 audit[3129]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3129 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.297000 audit[3129]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 
a1=fffff65db220 a2=0 a3=1 items=0 ppid=3023 pid=3129 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.297000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:45:25.298000 audit[3131]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3131 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.298000 audit[3131]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffcc9a57d0 a2=0 a3=1 items=0 ppid=3023 pid=3131 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.298000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 14 23:45:25.303000 audit[3134]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3134 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.303000 audit[3134]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff6ae4450 a2=0 a3=1 items=0 ppid=3023 pid=3134 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.303000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 14 23:45:25.304000 audit[3135]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3135 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.304000 audit[3135]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeee30010 a2=0 a3=1 items=0 ppid=3023 pid=3135 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.304000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:45:25.306000 audit[3137]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.306000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffde9658c0 a2=0 a3=1 items=0 ppid=3023 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.306000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:45:25.307000 audit[3138]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3138 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.307000 audit[3138]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 
a1=ffffc01c7dd0 a2=0 a3=1 items=0 ppid=3023 pid=3138 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.307000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:45:25.308000 audit[3140]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3140 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.308000 audit[3140]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd9de0ab0 a2=0 a3=1 items=0 ppid=3023 pid=3140 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.308000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:45:25.313000 audit[3143]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3143 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.313000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe60f8730 a2=0 a3=1 items=0 ppid=3023 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.313000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 14 23:45:25.314000 audit[3144]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3144 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.314000 audit[3144]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2e3d1c0 a2=0 a3=1 items=0 ppid=3023 pid=3144 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.314000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:45:25.316000 audit[3146]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3146 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.316000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffc9878060 a2=0 a3=1 items=0 ppid=3023 pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.316000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:45:25.317000 audit[3147]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.317000 audit[3147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd90062d0 a2=0 a3=1 
items=0 ppid=3023 pid=3147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.317000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:45:25.319000 audit[3149]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3149 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.319000 audit[3149]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe3c64030 a2=0 a3=1 items=0 ppid=3023 pid=3149 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.319000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:45:25.323000 audit[3152]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3152 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.323000 audit[3152]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc41cddb0 a2=0 a3=1 items=0 ppid=3023 pid=3152 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.323000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:45:25.326000 audit[3155]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.326000 audit[3155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffeaffce30 a2=0 a3=1 items=0 ppid=3023 pid=3155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:45:25.326000 audit[3156]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.326000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc57caea0 a2=0 a3=1 items=0 ppid=3023 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.326000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:45:25.328000 audit[3158]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3158 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.328000 audit[3158]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 
a1=ffffed68c860 a2=0 a3=1 items=0 ppid=3023 pid=3158 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.328000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:45:25.332000 audit[3161]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.332000 audit[3161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe68e9c50 a2=0 a3=1 items=0 ppid=3023 pid=3161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.332000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:45:25.333000 audit[3162]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.333000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff89d4e60 a2=0 a3=1 items=0 ppid=3023 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.333000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:45:25.335000 audit[3164]: NETFILTER_CFG 
table=nat:78 family=2 entries=1 op=nft_register_rule pid=3164 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 14 23:45:25.335000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffce3fcfb0 a2=0 a3=1 items=0 ppid=3023 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:45:25.355000 audit[3170]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:25.355000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe3c7cd40 a2=0 a3=1 items=0 ppid=3023 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.355000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:25.363000 audit[3170]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:25.363000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe3c7cd40 a2=0 a3=1 items=0 ppid=3023 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.363000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:25.366000 audit[3175]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3175 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.366000 audit[3175]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffd3856e0 a2=0 a3=1 items=0 ppid=3023 pid=3175 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.366000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 14 23:45:25.368000 audit[3177]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3177 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.368000 audit[3177]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffdb8d24c0 a2=0 a3=1 items=0 ppid=3023 pid=3177 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.368000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 14 23:45:25.371000 audit[3180]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3180 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.371000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff4474610 a2=0 a3=1 items=0 ppid=3023 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.371000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 14 23:45:25.373000 audit[3181]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3181 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.373000 audit[3181]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff241ca20 a2=0 a3=1 items=0 ppid=3023 pid=3181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.373000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 14 23:45:25.376000 audit[3183]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3183 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.376000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdc0a0a60 a2=0 a3=1 items=0 ppid=3023 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 14 23:45:25.376000 audit[3184]: NETFILTER_CFG table=filter:86 family=10 entries=1 
op=nft_register_chain pid=3184 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.376000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef43ffd0 a2=0 a3=1 items=0 ppid=3023 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.376000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 14 23:45:25.378000 audit[3186]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.378000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffcc3117c0 a2=0 a3=1 items=0 ppid=3023 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.378000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 14 23:45:25.383000 audit[3189]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3189 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.383000 audit[3189]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffdc789490 a2=0 a3=1 items=0 ppid=3023 pid=3189 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.383000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 14 23:45:25.383000 audit[3190]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3190 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.383000 audit[3190]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa7005e0 a2=0 a3=1 items=0 ppid=3023 pid=3190 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.383000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 14 23:45:25.385000 audit[3192]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3192 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.385000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce31be70 a2=0 a3=1 items=0 ppid=3023 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.385000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 14 23:45:25.386000 audit[3193]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3193 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.386000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd57e9eb0 a2=0 
a3=1 items=0 ppid=3023 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.386000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 14 23:45:25.389000 audit[3195]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.389000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc9691460 a2=0 a3=1 items=0 ppid=3023 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 14 23:45:25.392000 audit[3198]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3198 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.392000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd22d5d20 a2=0 a3=1 items=0 ppid=3023 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.392000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 14 23:45:25.396000 audit[3201]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3201 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.396000 audit[3201]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffed1d7410 a2=0 a3=1 items=0 ppid=3023 pid=3201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.396000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 14 23:45:25.397000 audit[3202]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3202 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.397000 audit[3202]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe40939e0 a2=0 a3=1 items=0 ppid=3023 pid=3202 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 14 23:45:25.399000 audit[3204]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3204 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.399000 audit[3204]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 
a0=3 a1=ffffe4b9b150 a2=0 a3=1 items=0 ppid=3023 pid=3204 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.399000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:45:25.402000 audit[3207]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.402000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffffb6a080 a2=0 a3=1 items=0 ppid=3023 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.402000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 14 23:45:25.404000 audit[3208]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3208 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.404000 audit[3208]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffef8bf780 a2=0 a3=1 items=0 ppid=3023 pid=3208 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.404000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 14 23:45:25.406000 audit[3210]: 
NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3210 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.406000 audit[3210]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffb9cf840 a2=0 a3=1 items=0 ppid=3023 pid=3210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.406000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 14 23:45:25.407000 audit[3211]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3211 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.407000 audit[3211]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd7e63e90 a2=0 a3=1 items=0 ppid=3023 pid=3211 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.407000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 14 23:45:25.409000 audit[3213]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3213 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.409000 audit[3213]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffed00930 a2=0 a3=1 items=0 ppid=3023 pid=3213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.409000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:45:25.414000 audit[3216]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3216 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 14 23:45:25.414000 audit[3216]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff175c7e0 a2=0 a3=1 items=0 ppid=3023 pid=3216 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.414000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 14 23:45:25.417000 audit[3218]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:45:25.417000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=ffffd74f6d40 a2=0 a3=1 items=0 ppid=3023 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.417000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:25.417000 audit[3218]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 14 23:45:25.417000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffd74f6d40 a2=0 a3=1 items=0 ppid=3023 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:25.417000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:26.734479 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount740565147.mount: Deactivated successfully. Jan 14 23:45:27.362404 containerd[1695]: time="2026-01-14T23:45:27.362352686Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:27.363303 containerd[1695]: time="2026-01-14T23:45:27.363242728Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 14 23:45:27.364233 containerd[1695]: time="2026-01-14T23:45:27.364203851Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:27.366755 containerd[1695]: time="2026-01-14T23:45:27.366700299Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:27.367398 containerd[1695]: time="2026-01-14T23:45:27.367374061Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.277829278s" Jan 14 23:45:27.367439 containerd[1695]: time="2026-01-14T23:45:27.367404141Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 14 23:45:27.369235 containerd[1695]: 
time="2026-01-14T23:45:27.369207427Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 14 23:45:27.377101 containerd[1695]: time="2026-01-14T23:45:27.376828130Z" level=info msg="Container 146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:27.382007 containerd[1695]: time="2026-01-14T23:45:27.381968146Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\"" Jan 14 23:45:27.383311 containerd[1695]: time="2026-01-14T23:45:27.382861308Z" level=info msg="StartContainer for \"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\"" Jan 14 23:45:27.383797 containerd[1695]: time="2026-01-14T23:45:27.383758191Z" level=info msg="connecting to shim 146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" protocol=ttrpc version=3 Jan 14 23:45:27.403841 systemd[1]: Started cri-containerd-146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae.scope - libcontainer container 146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae. 
Jan 14 23:45:27.413000 audit: BPF prog-id=146 op=LOAD Jan 14 23:45:27.413000 audit: BPF prog-id=147 op=LOAD Jan 14 23:45:27.413000 audit[3227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.413000 audit: BPF prog-id=147 op=UNLOAD Jan 14 23:45:27.413000 audit[3227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.413000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.413000 audit: BPF prog-id=148 op=LOAD Jan 14 23:45:27.413000 audit[3227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.413000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.414000 audit: BPF prog-id=149 op=LOAD Jan 14 23:45:27.414000 audit[3227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.414000 audit: BPF prog-id=149 op=UNLOAD Jan 14 23:45:27.414000 audit[3227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.414000 audit: BPF prog-id=148 op=UNLOAD Jan 14 23:45:27.414000 audit[3227]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:45:27.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.414000 audit: BPF prog-id=150 op=LOAD Jan 14 23:45:27.414000 audit[3227]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3039 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:27.414000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3134366265343837353966333030393939613133663336366135666361 Jan 14 23:45:27.428806 containerd[1695]: time="2026-01-14T23:45:27.428773089Z" level=info msg="StartContainer for \"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\" returns successfully" Jan 14 23:45:27.683202 kubelet[2898]: I0114 23:45:27.682992 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-7hg9f" podStartSLOduration=3.682974185 podStartE2EDuration="3.682974185s" podCreationTimestamp="2026-01-14 23:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:45:25.6780527 +0000 UTC m=+7.127591254" watchObservedRunningTime="2026-01-14 23:45:27.682974185 +0000 UTC m=+9.132512739" Jan 14 23:45:29.908082 kubelet[2898]: I0114 23:45:29.908012 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podStartSLOduration=3.6289293000000002 
podStartE2EDuration="5.907997662s" podCreationTimestamp="2026-01-14 23:45:24 +0000 UTC" firstStartedPulling="2026-01-14 23:45:25.088984861 +0000 UTC m=+6.538523375" lastFinishedPulling="2026-01-14 23:45:27.368053183 +0000 UTC m=+8.817591737" observedRunningTime="2026-01-14 23:45:27.683245306 +0000 UTC m=+9.132783860" watchObservedRunningTime="2026-01-14 23:45:29.907997662 +0000 UTC m=+11.357536216" Jan 14 23:45:32.596484 sudo[1954]: pam_unix(sudo:session): session closed for user root Jan 14 23:45:32.595000 audit[1954]: USER_END pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:45:32.599836 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 14 23:45:32.599961 kernel: audit: type=1106 audit(1768434332.595:518): pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_unix,pam_permit,pam_systemd acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:45:32.595000 audit[1954]: CRED_DISP pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 14 23:45:32.603569 kernel: audit: type=1104 audit(1768434332.595:519): pid=1954 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 14 23:45:32.693360 sshd[1953]: Connection closed by 20.161.92.111 port 32960 Jan 14 23:45:32.694636 sshd-session[1950]: pam_unix(sshd:session): session closed for user core Jan 14 23:45:32.695000 audit[1950]: USER_END pid=1950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:45:32.699608 systemd[1]: sshd@8-10.0.22.230:22-20.161.92.111:32960.service: Deactivated successfully. Jan 14 23:45:32.695000 audit[1950]: CRED_DISP pid=1950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:45:32.702801 systemd[1]: session-9.scope: Deactivated successfully. Jan 14 23:45:32.703463 kernel: audit: type=1106 audit(1768434332.695:520): pid=1950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_lastlog,pam_limits,pam_env,pam_unix,pam_permit,pam_systemd,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:45:32.703713 kernel: audit: type=1104 audit(1768434332.695:521): pid=1950 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock,pam_unix acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 14 23:45:32.704347 systemd[1]: session-9.scope: Consumed 8.145s CPU time, 218.7M memory peak. 
Jan 14 23:45:32.700000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.22.230:22-20.161.92.111:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:32.707824 kernel: audit: type=1131 audit(1768434332.700:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.22.230:22-20.161.92.111:32960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 14 23:45:32.708156 systemd-logind[1670]: Session 9 logged out. Waiting for processes to exit. Jan 14 23:45:32.709261 systemd-logind[1670]: Removed session 9. Jan 14 23:45:34.049000 audit[3317]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.049000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5e0fb60 a2=0 a3=1 items=0 ppid=3023 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.056347 kernel: audit: type=1325 audit(1768434334.049:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.056422 kernel: audit: type=1300 audit(1768434334.049:523): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc5e0fb60 a2=0 a3=1 items=0 ppid=3023 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.049000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 
23:45:34.061885 kernel: audit: type=1327 audit(1768434334.049:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:34.062012 kernel: audit: type=1325 audit(1768434334.057:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.057000 audit[3317]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3317 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.057000 audit[3317]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5e0fb60 a2=0 a3=1 items=0 ppid=3023 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.068927 kernel: audit: type=1300 audit(1768434334.057:524): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc5e0fb60 a2=0 a3=1 items=0 ppid=3023 pid=3317 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:34.081000 audit[3319]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.081000 audit[3319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffca01630 a2=0 a3=1 items=0 ppid=3023 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.081000 audit: 
PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:34.086000 audit[3319]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3319 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:34.086000 audit[3319]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffffca01630 a2=0 a3=1 items=0 ppid=3023 pid=3319 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:34.086000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.164000 audit[3321]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.166324 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 14 23:45:39.166391 kernel: audit: type=1325 audit(1768434339.164:527): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.164000 audit[3321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceafe740 a2=0 a3=1 items=0 ppid=3023 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.171980 kernel: audit: type=1300 audit(1768434339.164:527): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceafe740 a2=0 a3=1 items=0 ppid=3023 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:45:39.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.174798 kernel: audit: type=1327 audit(1768434339.164:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.174000 audit[3321]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.174000 audit[3321]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffceafe740 a2=0 a3=1 items=0 ppid=3023 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.181213 kernel: audit: type=1325 audit(1768434339.174:528): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3321 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.181285 kernel: audit: type=1300 audit(1768434339.174:528): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffceafe740 a2=0 a3=1 items=0 ppid=3023 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.174000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.184396 kernel: audit: type=1327 audit(1768434339.174:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.201000 audit[3324]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.206292 kernel: audit: 
type=1325 audit(1768434339.201:529): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.206400 kernel: audit: type=1300 audit(1768434339.201:529): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe12c6940 a2=0 a3=1 items=0 ppid=3023 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.201000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe12c6940 a2=0 a3=1 items=0 ppid=3023 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.201000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.211878 kernel: audit: type=1327 audit(1768434339.201:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.213000 audit[3324]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:39.213000 audit[3324]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe12c6940 a2=0 a3=1 items=0 ppid=3023 pid=3324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:39.213000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:39.217311 kernel: audit: type=1325 audit(1768434339.213:530): 
table=nat:112 family=2 entries=12 op=nft_register_rule pid=3324 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:40.224000 audit[3326]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:40.224000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffde7553d0 a2=0 a3=1 items=0 ppid=3023 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:40.224000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:40.235000 audit[3326]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3326 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:40.235000 audit[3326]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffde7553d0 a2=0 a3=1 items=0 ppid=3023 pid=3326 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:40.235000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:41.837000 audit[3328]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:41.837000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd7489630 a2=0 a3=1 items=0 ppid=3023 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:45:41.837000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:41.843000 audit[3328]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3328 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:41.843000 audit[3328]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd7489630 a2=0 a3=1 items=0 ppid=3023 pid=3328 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:41.843000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:41.873646 systemd[1]: Created slice kubepods-besteffort-pod96b6b9a4_7287_436f_bd04_497d8961f2b9.slice - libcontainer container kubepods-besteffort-pod96b6b9a4_7287_436f_bd04_497d8961f2b9.slice. 
Jan 14 23:45:41.947702 kubelet[2898]: I0114 23:45:41.947630 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/96b6b9a4-7287-436f-bd04-497d8961f2b9-typha-certs\") pod \"calico-typha-7c7bddb87-kdppc\" (UID: \"96b6b9a4-7287-436f-bd04-497d8961f2b9\") " pod="calico-system/calico-typha-7c7bddb87-kdppc" Jan 14 23:45:41.947702 kubelet[2898]: I0114 23:45:41.947674 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/96b6b9a4-7287-436f-bd04-497d8961f2b9-tigera-ca-bundle\") pod \"calico-typha-7c7bddb87-kdppc\" (UID: \"96b6b9a4-7287-436f-bd04-497d8961f2b9\") " pod="calico-system/calico-typha-7c7bddb87-kdppc" Jan 14 23:45:41.947702 kubelet[2898]: I0114 23:45:41.947698 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-44mtv\" (UniqueName: \"kubernetes.io/projected/96b6b9a4-7287-436f-bd04-497d8961f2b9-kube-api-access-44mtv\") pod \"calico-typha-7c7bddb87-kdppc\" (UID: \"96b6b9a4-7287-436f-bd04-497d8961f2b9\") " pod="calico-system/calico-typha-7c7bddb87-kdppc" Jan 14 23:45:42.041967 kubelet[2898]: I0114 23:45:42.041904 2898 status_manager.go:890] "Failed to get status for pod" podUID="503b0668-cb3b-4021-925d-4a6dd9c40b72" pod="calico-system/calico-node-pcznb" err="pods \"calico-node-pcznb\" is forbidden: User \"system:node:ci-4515-1-0-n-1d3be4f164\" cannot get resource \"pods\" in API group \"\" in the namespace \"calico-system\": no relationship found between node 'ci-4515-1-0-n-1d3be4f164' and this object" Jan 14 23:45:42.047735 systemd[1]: Created slice kubepods-besteffort-pod503b0668_cb3b_4021_925d_4a6dd9c40b72.slice - libcontainer container kubepods-besteffort-pod503b0668_cb3b_4021_925d_4a6dd9c40b72.slice. 
Jan 14 23:45:42.049399 kubelet[2898]: I0114 23:45:42.049334 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-cni-bin-dir\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049399 kubelet[2898]: I0114 23:45:42.049375 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-cni-log-dir\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049399 kubelet[2898]: I0114 23:45:42.049394 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-flexvol-driver-host\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049526 kubelet[2898]: I0114 23:45:42.049415 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-var-lib-calico\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049526 kubelet[2898]: I0114 23:45:42.049430 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-var-run-calico\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049526 kubelet[2898]: I0114 23:45:42.049456 2898 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/503b0668-cb3b-4021-925d-4a6dd9c40b72-node-certs\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049526 kubelet[2898]: I0114 23:45:42.049470 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-policysync\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049526 kubelet[2898]: I0114 23:45:42.049487 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hp2qq\" (UniqueName: \"kubernetes.io/projected/503b0668-cb3b-4021-925d-4a6dd9c40b72-kube-api-access-hp2qq\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049627 kubelet[2898]: I0114 23:45:42.049504 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-lib-modules\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049627 kubelet[2898]: I0114 23:45:42.049530 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-xtables-lock\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049627 kubelet[2898]: I0114 23:45:42.049547 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume 
started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/503b0668-cb3b-4021-925d-4a6dd9c40b72-cni-net-dir\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.049627 kubelet[2898]: I0114 23:45:42.049565 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/503b0668-cb3b-4021-925d-4a6dd9c40b72-tigera-ca-bundle\") pod \"calico-node-pcznb\" (UID: \"503b0668-cb3b-4021-925d-4a6dd9c40b72\") " pod="calico-system/calico-node-pcznb" Jan 14 23:45:42.151526 kubelet[2898]: E0114 23:45:42.151424 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.151526 kubelet[2898]: W0114 23:45:42.151449 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.151526 kubelet[2898]: E0114 23:45:42.151468 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.153544 kubelet[2898]: E0114 23:45:42.153415 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.153544 kubelet[2898]: W0114 23:45:42.153434 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.153544 kubelet[2898]: E0114 23:45:42.153457 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.154567 kubelet[2898]: E0114 23:45:42.154551 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.154640 kubelet[2898]: W0114 23:45:42.154621 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.157358 kubelet[2898]: E0114 23:45:42.157316 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.157988 kubelet[2898]: E0114 23:45:42.157956 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.158158 kubelet[2898]: W0114 23:45:42.158064 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.158158 kubelet[2898]: E0114 23:45:42.158091 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.158814 kubelet[2898]: E0114 23:45:42.158789 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.158814 kubelet[2898]: W0114 23:45:42.158812 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.158901 kubelet[2898]: E0114 23:45:42.158834 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.159449 kubelet[2898]: E0114 23:45:42.159428 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.159497 kubelet[2898]: W0114 23:45:42.159444 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.159497 kubelet[2898]: E0114 23:45:42.159493 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.169059 kubelet[2898]: E0114 23:45:42.168977 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.169059 kubelet[2898]: W0114 23:45:42.169001 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.169059 kubelet[2898]: E0114 23:45:42.169021 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.179782 containerd[1695]: time="2026-01-14T23:45:42.179635108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c7bddb87-kdppc,Uid:96b6b9a4-7287-436f-bd04-497d8961f2b9,Namespace:calico-system,Attempt:0,}" Jan 14 23:45:42.206192 containerd[1695]: time="2026-01-14T23:45:42.206151589Z" level=info msg="connecting to shim 56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b" address="unix:///run/containerd/s/4f854f427d0d334a0876fd7b054ed41ed3634c89c2a44d805af4390f3cb2dc6b" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:45:42.237528 systemd[1]: Started cri-containerd-56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b.scope - libcontainer container 56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b. 
Jan 14 23:45:42.246436 kubelet[2898]: E0114 23:45:42.246378 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:42.257000 audit: BPF prog-id=151 op=LOAD Jan 14 23:45:42.258000 audit: BPF prog-id=152 op=LOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=152 op=UNLOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=153 op=LOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3348 pid=3360 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=154 op=LOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=154 op=UNLOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=153 op=UNLOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 
a2=0 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.258000 audit: BPF prog-id=155 op=LOAD Jan 14 23:45:42.258000 audit[3360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3348 pid=3360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.258000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3536646564376337366237666164633262303162343330356339633966 Jan 14 23:45:42.285121 containerd[1695]: time="2026-01-14T23:45:42.285062270Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7c7bddb87-kdppc,Uid:96b6b9a4-7287-436f-bd04-497d8961f2b9,Namespace:calico-system,Attempt:0,} returns sandbox id \"56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b\"" Jan 14 23:45:42.286843 containerd[1695]: time="2026-01-14T23:45:42.286763195Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 14 23:45:42.344046 kubelet[2898]: E0114 23:45:42.343969 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.344046 kubelet[2898]: W0114 23:45:42.343995 2898 driver-call.go:149] FlexVolume: driver call failed: executable: 
/opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.344046 kubelet[2898]: E0114 23:45:42.344015 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.344537 kubelet[2898]: E0114 23:45:42.344512 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.344664 kubelet[2898]: W0114 23:45:42.344649 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.344830 kubelet[2898]: E0114 23:45:42.344732 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.345099 kubelet[2898]: E0114 23:45:42.345084 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.345393 kubelet[2898]: W0114 23:45:42.345138 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.345393 kubelet[2898]: E0114 23:45:42.345312 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.345611 kubelet[2898]: E0114 23:45:42.345596 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.345671 kubelet[2898]: W0114 23:45:42.345659 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.345729 kubelet[2898]: E0114 23:45:42.345719 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.346019 kubelet[2898]: E0114 23:45:42.345924 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.346019 kubelet[2898]: W0114 23:45:42.345935 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.346019 kubelet[2898]: E0114 23:45:42.345945 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.346174 kubelet[2898]: E0114 23:45:42.346161 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.346223 kubelet[2898]: W0114 23:45:42.346213 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.346301 kubelet[2898]: E0114 23:45:42.346290 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.346570 kubelet[2898]: E0114 23:45:42.346472 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.346570 kubelet[2898]: W0114 23:45:42.346483 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.346570 kubelet[2898]: E0114 23:45:42.346492 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.346728 kubelet[2898]: E0114 23:45:42.346714 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.346782 kubelet[2898]: W0114 23:45:42.346772 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.346828 kubelet[2898]: E0114 23:45:42.346819 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.347020 kubelet[2898]: E0114 23:45:42.347007 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.347087 kubelet[2898]: W0114 23:45:42.347076 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.347138 kubelet[2898]: E0114 23:45:42.347126 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.347332 kubelet[2898]: E0114 23:45:42.347319 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.347411 kubelet[2898]: W0114 23:45:42.347398 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.347461 kubelet[2898]: E0114 23:45:42.347452 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.347681 kubelet[2898]: E0114 23:45:42.347647 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.347740 kubelet[2898]: W0114 23:45:42.347659 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.347793 kubelet[2898]: E0114 23:45:42.347780 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.348132 kubelet[2898]: E0114 23:45:42.348117 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.348210 kubelet[2898]: W0114 23:45:42.348198 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.348276 kubelet[2898]: E0114 23:45:42.348253 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.348615 kubelet[2898]: E0114 23:45:42.348512 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.348615 kubelet[2898]: W0114 23:45:42.348524 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.348615 kubelet[2898]: E0114 23:45:42.348534 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.348788 kubelet[2898]: E0114 23:45:42.348774 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.348847 kubelet[2898]: W0114 23:45:42.348835 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.348902 kubelet[2898]: E0114 23:45:42.348891 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.349191 kubelet[2898]: E0114 23:45:42.349101 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.349191 kubelet[2898]: W0114 23:45:42.349113 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.349191 kubelet[2898]: E0114 23:45:42.349122 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.349353 kubelet[2898]: E0114 23:45:42.349341 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.349466 kubelet[2898]: W0114 23:45:42.349453 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.349520 kubelet[2898]: E0114 23:45:42.349509 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.349833 kubelet[2898]: E0114 23:45:42.349738 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.349833 kubelet[2898]: W0114 23:45:42.349752 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.349833 kubelet[2898]: E0114 23:45:42.349762 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.349987 kubelet[2898]: E0114 23:45:42.349974 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.350047 kubelet[2898]: W0114 23:45:42.350036 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.350100 kubelet[2898]: E0114 23:45:42.350090 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.350419 kubelet[2898]: E0114 23:45:42.350321 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.350419 kubelet[2898]: W0114 23:45:42.350333 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.350419 kubelet[2898]: E0114 23:45:42.350343 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.350572 kubelet[2898]: E0114 23:45:42.350559 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.350636 kubelet[2898]: W0114 23:45:42.350625 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.350740 kubelet[2898]: E0114 23:45:42.350725 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.352113 kubelet[2898]: E0114 23:45:42.352061 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.352380 kubelet[2898]: W0114 23:45:42.352251 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.352524 containerd[1695]: time="2026-01-14T23:45:42.352429436Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pcznb,Uid:503b0668-cb3b-4021-925d-4a6dd9c40b72,Namespace:calico-system,Attempt:0,}" Jan 14 23:45:42.352753 kubelet[2898]: E0114 23:45:42.352592 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.352753 kubelet[2898]: I0114 23:45:42.352705 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/5c454d6a-8fe3-46dd-a39b-d216b7be481d-socket-dir\") pod \"csi-node-driver-2lqxs\" (UID: \"5c454d6a-8fe3-46dd-a39b-d216b7be481d\") " pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:45:42.353108 kubelet[2898]: E0114 23:45:42.353093 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.353190 kubelet[2898]: W0114 23:45:42.353177 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.353413 kubelet[2898]: E0114 23:45:42.353391 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.353608 kubelet[2898]: E0114 23:45:42.353594 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.353691 kubelet[2898]: W0114 23:45:42.353661 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.353807 kubelet[2898]: E0114 23:45:42.353752 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.353807 kubelet[2898]: I0114 23:45:42.353790 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/5c454d6a-8fe3-46dd-a39b-d216b7be481d-varrun\") pod \"csi-node-driver-2lqxs\" (UID: \"5c454d6a-8fe3-46dd-a39b-d216b7be481d\") " pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:45:42.354104 kubelet[2898]: E0114 23:45:42.354053 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.354104 kubelet[2898]: W0114 23:45:42.354066 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.354104 kubelet[2898]: E0114 23:45:42.354076 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.354490 kubelet[2898]: E0114 23:45:42.354474 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.354861 kubelet[2898]: W0114 23:45:42.354538 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.354861 kubelet[2898]: E0114 23:45:42.354563 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.354861 kubelet[2898]: E0114 23:45:42.354743 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.354861 kubelet[2898]: W0114 23:45:42.354753 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.354861 kubelet[2898]: E0114 23:45:42.354773 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.355288 kubelet[2898]: E0114 23:45:42.355099 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.355288 kubelet[2898]: W0114 23:45:42.355125 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.355288 kubelet[2898]: E0114 23:45:42.355137 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.355288 kubelet[2898]: I0114 23:45:42.355162 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wx4j2\" (UniqueName: \"kubernetes.io/projected/5c454d6a-8fe3-46dd-a39b-d216b7be481d-kube-api-access-wx4j2\") pod \"csi-node-driver-2lqxs\" (UID: \"5c454d6a-8fe3-46dd-a39b-d216b7be481d\") " pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:45:42.355542 kubelet[2898]: E0114 23:45:42.355512 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.355639 kubelet[2898]: W0114 23:45:42.355594 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.356463 kubelet[2898]: E0114 23:45:42.356446 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.356618 kubelet[2898]: I0114 23:45:42.356587 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/5c454d6a-8fe3-46dd-a39b-d216b7be481d-kubelet-dir\") pod \"csi-node-driver-2lqxs\" (UID: \"5c454d6a-8fe3-46dd-a39b-d216b7be481d\") " pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:45:42.357127 kubelet[2898]: E0114 23:45:42.357024 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.357635 kubelet[2898]: W0114 23:45:42.357604 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.357724 kubelet[2898]: E0114 23:45:42.357709 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.358240 kubelet[2898]: E0114 23:45:42.358218 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.358240 kubelet[2898]: W0114 23:45:42.358237 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.358547 kubelet[2898]: E0114 23:45:42.358257 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.358547 kubelet[2898]: I0114 23:45:42.358341 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/5c454d6a-8fe3-46dd-a39b-d216b7be481d-registration-dir\") pod \"csi-node-driver-2lqxs\" (UID: \"5c454d6a-8fe3-46dd-a39b-d216b7be481d\") " pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:45:42.359618 kubelet[2898]: E0114 23:45:42.359506 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.359618 kubelet[2898]: W0114 23:45:42.359523 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.359618 kubelet[2898]: E0114 23:45:42.359540 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.359785 kubelet[2898]: E0114 23:45:42.359773 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.359840 kubelet[2898]: W0114 23:45:42.359828 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.359902 kubelet[2898]: E0114 23:45:42.359881 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.360185 kubelet[2898]: E0114 23:45:42.360156 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.360185 kubelet[2898]: W0114 23:45:42.360171 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.360418 kubelet[2898]: E0114 23:45:42.360398 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.360768 kubelet[2898]: E0114 23:45:42.360522 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.360768 kubelet[2898]: W0114 23:45:42.360531 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.360768 kubelet[2898]: E0114 23:45:42.360541 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.361046 kubelet[2898]: E0114 23:45:42.360937 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.361046 kubelet[2898]: W0114 23:45:42.360949 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.361046 kubelet[2898]: E0114 23:45:42.360960 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.378117 containerd[1695]: time="2026-01-14T23:45:42.378072834Z" level=info msg="connecting to shim 63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555" address="unix:///run/containerd/s/90a1ee0e11147462d1d9eb2e096e972e658978c10625898413a99a44161de7c3" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:45:42.402592 systemd[1]: Started cri-containerd-63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555.scope - libcontainer container 63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555. 
Jan 14 23:45:42.411000 audit: BPF prog-id=156 op=LOAD Jan 14 23:45:42.411000 audit: BPF prog-id=157 op=LOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=157 op=UNLOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=158 op=LOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=159 op=LOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=159 op=UNLOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=158 op=UNLOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.411000 audit: BPF prog-id=160 op=LOAD Jan 14 23:45:42.411000 audit[3461]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3450 pid=3461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.411000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633653761656462353438313334336532343933303633356333306233 Jan 14 23:45:42.424807 containerd[1695]: time="2026-01-14T23:45:42.424770377Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-pcznb,Uid:503b0668-cb3b-4021-925d-4a6dd9c40b72,Namespace:calico-system,Attempt:0,} returns sandbox id \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\"" Jan 14 23:45:42.460497 kubelet[2898]: E0114 23:45:42.460458 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.460497 kubelet[2898]: W0114 23:45:42.460484 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.460497 kubelet[2898]: E0114 23:45:42.460505 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.461025 kubelet[2898]: E0114 23:45:42.460754 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461025 kubelet[2898]: W0114 23:45:42.460763 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461025 kubelet[2898]: E0114 23:45:42.460778 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.461025 kubelet[2898]: E0114 23:45:42.460974 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461025 kubelet[2898]: W0114 23:45:42.460982 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461025 kubelet[2898]: E0114 23:45:42.460996 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.461236 kubelet[2898]: E0114 23:45:42.461144 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461236 kubelet[2898]: W0114 23:45:42.461152 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461236 kubelet[2898]: E0114 23:45:42.461164 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.461443 kubelet[2898]: E0114 23:45:42.461314 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461443 kubelet[2898]: W0114 23:45:42.461322 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461443 kubelet[2898]: E0114 23:45:42.461334 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.461592 kubelet[2898]: E0114 23:45:42.461577 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461648 kubelet[2898]: W0114 23:45:42.461637 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461714 kubelet[2898]: E0114 23:45:42.461702 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.461867 kubelet[2898]: E0114 23:45:42.461849 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.461867 kubelet[2898]: W0114 23:45:42.461862 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.461931 kubelet[2898]: E0114 23:45:42.461877 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.462041 kubelet[2898]: E0114 23:45:42.462027 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.462041 kubelet[2898]: W0114 23:45:42.462038 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.462105 kubelet[2898]: E0114 23:45:42.462050 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.462176 kubelet[2898]: E0114 23:45:42.462166 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.462176 kubelet[2898]: W0114 23:45:42.462175 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.462233 kubelet[2898]: E0114 23:45:42.462201 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.462322 kubelet[2898]: E0114 23:45:42.462310 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.462322 kubelet[2898]: W0114 23:45:42.462321 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.462429 kubelet[2898]: E0114 23:45:42.462388 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.462496 kubelet[2898]: E0114 23:45:42.462482 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.462496 kubelet[2898]: W0114 23:45:42.462493 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.462642 kubelet[2898]: E0114 23:45:42.462618 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.462642 kubelet[2898]: W0114 23:45:42.462631 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.462642 kubelet[2898]: E0114 23:45:42.462635 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.462857 kubelet[2898]: E0114 23:45:42.462659 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.463624 kubelet[2898]: E0114 23:45:42.463606 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.463624 kubelet[2898]: W0114 23:45:42.463619 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.463700 kubelet[2898]: E0114 23:45:42.463636 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.463816 kubelet[2898]: E0114 23:45:42.463796 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.463816 kubelet[2898]: W0114 23:45:42.463806 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.463869 kubelet[2898]: E0114 23:45:42.463843 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.463980 kubelet[2898]: E0114 23:45:42.463969 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464018 kubelet[2898]: W0114 23:45:42.463980 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464018 kubelet[2898]: E0114 23:45:42.464005 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.464115 kubelet[2898]: E0114 23:45:42.464105 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464142 kubelet[2898]: W0114 23:45:42.464115 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464170 kubelet[2898]: E0114 23:45:42.464138 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.464252 kubelet[2898]: E0114 23:45:42.464241 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464252 kubelet[2898]: W0114 23:45:42.464250 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464459 kubelet[2898]: E0114 23:45:42.464301 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.464459 kubelet[2898]: E0114 23:45:42.464375 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464459 kubelet[2898]: W0114 23:45:42.464382 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464459 kubelet[2898]: E0114 23:45:42.464452 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.464604 kubelet[2898]: E0114 23:45:42.464561 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464604 kubelet[2898]: W0114 23:45:42.464571 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464604 kubelet[2898]: E0114 23:45:42.464584 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.464786 kubelet[2898]: E0114 23:45:42.464745 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464786 kubelet[2898]: W0114 23:45:42.464753 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.464786 kubelet[2898]: E0114 23:45:42.464766 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.464961 kubelet[2898]: E0114 23:45:42.464948 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.464961 kubelet[2898]: W0114 23:45:42.464960 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.465019 kubelet[2898]: E0114 23:45:42.464973 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.465113 kubelet[2898]: E0114 23:45:42.465101 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.465113 kubelet[2898]: W0114 23:45:42.465112 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.465172 kubelet[2898]: E0114 23:45:42.465124 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.465532 kubelet[2898]: E0114 23:45:42.465316 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.465532 kubelet[2898]: W0114 23:45:42.465326 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.465532 kubelet[2898]: E0114 23:45:42.465346 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.465532 kubelet[2898]: E0114 23:45:42.465494 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.465532 kubelet[2898]: W0114 23:45:42.465503 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.465532 kubelet[2898]: E0114 23:45:42.465511 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.466253 kubelet[2898]: E0114 23:45:42.465771 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.466253 kubelet[2898]: W0114 23:45:42.465793 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.466253 kubelet[2898]: E0114 23:45:42.465804 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:42.474358 kubelet[2898]: E0114 23:45:42.474329 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:42.474358 kubelet[2898]: W0114 23:45:42.474351 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:42.474478 kubelet[2898]: E0114 23:45:42.474372 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:42.853000 audit[3515]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:42.853000 audit[3515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb6744e0 a2=0 a3=1 items=0 ppid=3023 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.853000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:42.859000 audit[3515]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3515 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:45:42.859000 audit[3515]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb6744e0 a2=0 a3=1 items=0 ppid=3023 pid=3515 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:42.859000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:45:43.631544 kubelet[2898]: E0114 23:45:43.631479 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:43.748149 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount905911446.mount: Deactivated successfully. 
Jan 14 23:45:44.392403 containerd[1695]: time="2026-01-14T23:45:44.392305387Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:44.393120 containerd[1695]: time="2026-01-14T23:45:44.393077309Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 14 23:45:44.393920 containerd[1695]: time="2026-01-14T23:45:44.393897552Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:44.395948 containerd[1695]: time="2026-01-14T23:45:44.395903038Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:44.396653 containerd[1695]: time="2026-01-14T23:45:44.396428080Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.109631604s" Jan 14 23:45:44.396653 containerd[1695]: time="2026-01-14T23:45:44.396458840Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 14 23:45:44.397956 containerd[1695]: time="2026-01-14T23:45:44.397803644Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 14 23:45:44.406558 containerd[1695]: time="2026-01-14T23:45:44.406523351Z" level=info msg="CreateContainer within sandbox \"56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 14 23:45:44.415866 containerd[1695]: time="2026-01-14T23:45:44.415155097Z" level=info msg="Container 1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:44.423817 containerd[1695]: time="2026-01-14T23:45:44.423779123Z" level=info msg="CreateContainer within sandbox \"56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a\"" Jan 14 23:45:44.424188 containerd[1695]: time="2026-01-14T23:45:44.424160484Z" level=info msg="StartContainer for \"1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a\"" Jan 14 23:45:44.425289 containerd[1695]: time="2026-01-14T23:45:44.425249248Z" level=info msg="connecting to shim 1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a" address="unix:///run/containerd/s/4f854f427d0d334a0876fd7b054ed41ed3634c89c2a44d805af4390f3cb2dc6b" protocol=ttrpc version=3 Jan 14 23:45:44.444483 systemd[1]: Started cri-containerd-1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a.scope - libcontainer container 1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a. 
Jan 14 23:45:44.455000 audit: BPF prog-id=161 op=LOAD Jan 14 23:45:44.457409 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 14 23:45:44.457435 kernel: audit: type=1334 audit(1768434344.455:553): prog-id=161 op=LOAD Jan 14 23:45:44.456000 audit: BPF prog-id=162 op=LOAD Jan 14 23:45:44.456000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.462044 kernel: audit: type=1334 audit(1768434344.456:554): prog-id=162 op=LOAD Jan 14 23:45:44.462180 kernel: audit: type=1300 audit(1768434344.456:554): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.462223 kernel: audit: type=1327 audit(1768434344.456:554): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.456000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.457000 audit: BPF prog-id=162 op=UNLOAD Jan 14 23:45:44.465991 kernel: audit: type=1334 audit(1768434344.457:555): prog-id=162 op=UNLOAD Jan 14 23:45:44.457000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3526 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.469084 kernel: audit: type=1300 audit(1768434344.457:555): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.472344 kernel: audit: type=1327 audit(1768434344.457:555): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.457000 audit: BPF prog-id=163 op=LOAD Jan 14 23:45:44.474668 kernel: audit: type=1334 audit(1768434344.457:556): prog-id=163 op=LOAD Jan 14 23:45:44.474821 kernel: audit: type=1300 audit(1768434344.457:556): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.457000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:45:44.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.481766 kernel: audit: type=1327 audit(1768434344.457:556): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.457000 audit: BPF prog-id=164 op=LOAD Jan 14 23:45:44.457000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.457000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.460000 audit: BPF prog-id=164 op=UNLOAD Jan 14 23:45:44.460000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.460000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.460000 audit: BPF prog-id=163 op=UNLOAD Jan 14 23:45:44.460000 audit[3526]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.460000 audit: BPF prog-id=165 op=LOAD Jan 14 23:45:44.460000 audit[3526]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3348 pid=3526 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:44.460000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161383836666539613336616135383832656564313432636237326261 Jan 14 23:45:44.498623 containerd[1695]: time="2026-01-14T23:45:44.498556952Z" level=info msg="StartContainer for \"1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a\" returns successfully" Jan 14 23:45:44.721589 kubelet[2898]: I0114 23:45:44.721306 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="calico-system/calico-typha-7c7bddb87-kdppc" podStartSLOduration=1.6105563040000002 podStartE2EDuration="3.721291352s" podCreationTimestamp="2026-01-14 23:45:41 +0000 UTC" firstStartedPulling="2026-01-14 23:45:42.286440834 +0000 UTC m=+23.735979388" lastFinishedPulling="2026-01-14 23:45:44.397175882 +0000 UTC m=+25.846714436" observedRunningTime="2026-01-14 23:45:44.720360269 +0000 UTC m=+26.169898823" watchObservedRunningTime="2026-01-14 23:45:44.721291352 +0000 UTC m=+26.170829906" Jan 14 23:45:44.767254 kubelet[2898]: E0114 23:45:44.767214 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:44.767254 kubelet[2898]: W0114 23:45:44.767239 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:44.767414 kubelet[2898]: E0114 23:45:44.767261 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:44.767517 kubelet[2898]: E0114 23:45:44.767504 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:44.767553 kubelet[2898]: W0114 23:45:44.767514 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:44.767580 kubelet[2898]: E0114 23:45:44.767557 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:44.767735 kubelet[2898]: E0114 23:45:44.767726 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:44.767763 kubelet[2898]: W0114 23:45:44.767736 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:44.767763 kubelet[2898]: E0114 23:45:44.767744 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:44.767886 kubelet[2898]: E0114 23:45:44.767874 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:44.767886 kubelet[2898]: W0114 23:45:44.767884 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:44.767936 kubelet[2898]: E0114 23:45:44.767892 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:44.768035 kubelet[2898]: E0114 23:45:44.768024 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:44.768035 kubelet[2898]: W0114 23:45:44.768033 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:44.768083 kubelet[2898]: E0114 23:45:44.768041 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" 
Jan 14 23:45:45.631520 kubelet[2898]: E0114 23:45:45.631467 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:45.712170 kubelet[2898]: I0114 23:45:45.712127 2898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 23:45:45.785262 kubelet[2898]: E0114 23:45:45.785219 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.785262 kubelet[2898]: W0114 23:45:45.785245 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.785627 kubelet[2898]: E0114 23:45:45.785285 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:45.834177 kubelet[2898]: E0114 23:45:45.834117 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.834177 kubelet[2898]: W0114 23:45:45.834126 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.834177 kubelet[2898]: E0114 23:45:45.834141 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:45.834336 kubelet[2898]: E0114 23:45:45.834313 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.834336 kubelet[2898]: W0114 23:45:45.834327 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.834380 kubelet[2898]: E0114 23:45:45.834341 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:45.834543 kubelet[2898]: E0114 23:45:45.834524 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.834570 kubelet[2898]: W0114 23:45:45.834544 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.834570 kubelet[2898]: E0114 23:45:45.834564 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:45.834716 kubelet[2898]: E0114 23:45:45.834706 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.834739 kubelet[2898]: W0114 23:45:45.834716 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.834739 kubelet[2898]: E0114 23:45:45.834729 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:45.835440 kubelet[2898]: E0114 23:45:45.835410 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.835440 kubelet[2898]: W0114 23:45:45.835432 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.835514 kubelet[2898]: E0114 23:45:45.835468 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:45.835703 kubelet[2898]: E0114 23:45:45.835681 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.835741 kubelet[2898]: W0114 23:45:45.835727 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.835768 kubelet[2898]: E0114 23:45:45.835742 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 14 23:45:45.836171 kubelet[2898]: E0114 23:45:45.836152 2898 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 14 23:45:45.836171 kubelet[2898]: W0114 23:45:45.836170 2898 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 14 23:45:45.836232 kubelet[2898]: E0114 23:45:45.836183 2898 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 14 23:45:46.013278 containerd[1695]: time="2026-01-14T23:45:46.013223778Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:46.014215 containerd[1695]: time="2026-01-14T23:45:46.014064901Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=4262566" Jan 14 23:45:46.015420 containerd[1695]: time="2026-01-14T23:45:46.015382825Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:46.017977 containerd[1695]: time="2026-01-14T23:45:46.017950833Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:45:46.018751 containerd[1695]: time="2026-01-14T23:45:46.018721075Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.620887391s" Jan 14 23:45:46.018786 containerd[1695]: time="2026-01-14T23:45:46.018754835Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 14 23:45:46.020851 containerd[1695]: time="2026-01-14T23:45:46.020819522Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for container 
&ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 14 23:45:46.030322 containerd[1695]: time="2026-01-14T23:45:46.028979266Z" level=info msg="Container d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:45:46.038618 containerd[1695]: time="2026-01-14T23:45:46.038571496Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd\"" Jan 14 23:45:46.040043 containerd[1695]: time="2026-01-14T23:45:46.038965377Z" level=info msg="StartContainer for \"d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd\"" Jan 14 23:45:46.041663 containerd[1695]: time="2026-01-14T23:45:46.041565105Z" level=info msg="connecting to shim d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd" address="unix:///run/containerd/s/90a1ee0e11147462d1d9eb2e096e972e658978c10625898413a99a44161de7c3" protocol=ttrpc version=3 Jan 14 23:45:46.073525 systemd[1]: Started cri-containerd-d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd.scope - libcontainer container d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd. 
Jan 14 23:45:46.135000 audit: BPF prog-id=166 op=LOAD Jan 14 23:45:46.135000 audit[3637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3450 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435373236343037366233643831303732653537636334376530393663 Jan 14 23:45:46.135000 audit: BPF prog-id=167 op=LOAD Jan 14 23:45:46.135000 audit[3637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3450 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435373236343037366233643831303732653537636334376530393663 Jan 14 23:45:46.135000 audit: BPF prog-id=167 op=UNLOAD Jan 14 23:45:46.135000 audit[3637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:46.135000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435373236343037366233643831303732653537636334376530393663 Jan 14 23:45:46.135000 audit: BPF prog-id=166 op=UNLOAD Jan 14 23:45:46.135000 audit[3637]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435373236343037366233643831303732653537636334376530393663 Jan 14 23:45:46.135000 audit: BPF prog-id=168 op=LOAD Jan 14 23:45:46.135000 audit[3637]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3450 pid=3637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:45:46.135000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6435373236343037366233643831303732653537636334376530393663 Jan 14 23:45:46.154470 containerd[1695]: time="2026-01-14T23:45:46.154432530Z" level=info msg="StartContainer for \"d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd\" returns successfully" Jan 14 23:45:46.164842 systemd[1]: cri-containerd-d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd.scope: Deactivated successfully. 
Jan 14 23:45:46.168000 audit: BPF prog-id=168 op=UNLOAD Jan 14 23:45:46.169583 containerd[1695]: time="2026-01-14T23:45:46.169524696Z" level=info msg="received container exit event container_id:\"d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd\" id:\"d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd\" pid:3650 exited_at:{seconds:1768434346 nanos:167970731}" Jan 14 23:45:46.188415 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd-rootfs.mount: Deactivated successfully. Jan 14 23:45:47.631779 kubelet[2898]: E0114 23:45:47.631715 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:49.631768 kubelet[2898]: E0114 23:45:49.631717 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:49.724665 containerd[1695]: time="2026-01-14T23:45:49.724601830Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 14 23:45:51.631717 kubelet[2898]: E0114 23:45:51.631646 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:53.631308 kubelet[2898]: E0114 23:45:53.631152 2898 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:55.631535 kubelet[2898]: E0114 23:45:55.631467 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:57.631512 kubelet[2898]: E0114 23:45:57.631438 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:45:59.631522 kubelet[2898]: E0114 23:45:59.631425 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:01.631871 kubelet[2898]: E0114 23:46:01.631796 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:03.632144 kubelet[2898]: E0114 23:46:03.631923 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false 
reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:05.631342 kubelet[2898]: E0114 23:46:05.631238 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:07.632021 kubelet[2898]: E0114 23:46:07.631930 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:09.632129 kubelet[2898]: E0114 23:46:09.632037 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:11.117328 kubelet[2898]: I0114 23:46:11.116972 2898 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 14 23:46:11.144000 audit[3691]: NETFILTER_CFG table=filter:119 family=2 entries=21 op=nft_register_rule pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:11.149129 kernel: kauditd_printk_skb: 28 callbacks suppressed Jan 14 23:46:11.149198 kernel: audit: type=1325 audit(1768434371.144:567): table=filter:119 family=2 entries=21 op=nft_register_rule pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:11.144000 audit[3691]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=7480 a0=3 a1=ffffe801a340 a2=0 a3=1 items=0 ppid=3023 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:11.153508 kernel: audit: type=1300 audit(1768434371.144:567): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe801a340 a2=0 a3=1 items=0 ppid=3023 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:11.144000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:11.155624 kernel: audit: type=1327 audit(1768434371.144:567): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:11.157000 audit[3691]: NETFILTER_CFG table=nat:120 family=2 entries=19 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:11.157000 audit[3691]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe801a340 a2=0 a3=1 items=0 ppid=3023 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:11.164571 kernel: audit: type=1325 audit(1768434371.157:568): table=nat:120 family=2 entries=19 op=nft_register_chain pid=3691 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:46:11.164668 kernel: audit: type=1300 audit(1768434371.157:568): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffe801a340 a2=0 a3=1 items=0 ppid=3023 pid=3691 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:46:11.164718 kernel: audit: type=1327 audit(1768434371.157:568): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:11.157000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:46:11.631995 kubelet[2898]: E0114 23:46:11.631536 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:13.631256 kubelet[2898]: E0114 23:46:13.631148 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:15.632093 kubelet[2898]: E0114 23:46:15.631991 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:17.631969 kubelet[2898]: E0114 23:46:17.631891 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:19.631602 
kubelet[2898]: E0114 23:46:19.631544 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:21.631498 kubelet[2898]: E0114 23:46:21.631420 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:23.632063 kubelet[2898]: E0114 23:46:23.631602 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:25.631833 kubelet[2898]: E0114 23:46:25.631726 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:27.631811 kubelet[2898]: E0114 23:46:27.631427 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:29.631585 kubelet[2898]: E0114 23:46:29.631476 2898 pod_workers.go:1301] "Error syncing pod, 
skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:31.631263 kubelet[2898]: E0114 23:46:31.631211 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:33.631288 kubelet[2898]: E0114 23:46:33.631155 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:35.631083 kubelet[2898]: E0114 23:46:35.631027 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:37.631531 kubelet[2898]: E0114 23:46:37.631404 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:39.631153 kubelet[2898]: E0114 23:46:39.631063 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:41.631491 kubelet[2898]: E0114 23:46:41.631434 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:43.631566 kubelet[2898]: E0114 23:46:43.631475 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:45.631654 kubelet[2898]: E0114 23:46:45.631409 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:47.631598 kubelet[2898]: E0114 23:46:47.631502 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:49.631639 kubelet[2898]: E0114 23:46:49.631576 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni 
plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:51.631298 kubelet[2898]: E0114 23:46:51.631168 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:53.632048 kubelet[2898]: E0114 23:46:53.631786 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:55.631907 kubelet[2898]: E0114 23:46:55.631672 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:57.631247 kubelet[2898]: E0114 23:46:57.631201 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:46:59.631504 kubelet[2898]: E0114 23:46:59.631407 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" 
podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:01.630951 kubelet[2898]: E0114 23:47:01.630878 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:03.631872 kubelet[2898]: E0114 23:47:03.631777 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:05.632260 kubelet[2898]: E0114 23:47:05.632040 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:07.631711 kubelet[2898]: E0114 23:47:07.631655 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:09.631624 kubelet[2898]: E0114 23:47:09.631542 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:11.631181 kubelet[2898]: E0114 
23:47:11.631078 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:13.631448 kubelet[2898]: E0114 23:47:13.631348 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:15.632188 kubelet[2898]: E0114 23:47:15.632086 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:17.631604 kubelet[2898]: E0114 23:47:17.631411 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:18.682587 kubelet[2898]: E0114 23:47:18.682525 2898 kubelet_node_status.go:460] "Node not becoming ready in time after startup" Jan 14 23:47:18.699998 kubelet[2898]: E0114 23:47:18.699949 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:19.631448 kubelet[2898]: E0114 23:47:19.631395 2898 pod_workers.go:1301] "Error syncing pod, skipping" 
err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:21.631431 kubelet[2898]: E0114 23:47:21.631381 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:23.631494 kubelet[2898]: E0114 23:47:23.631439 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:23.701383 kubelet[2898]: E0114 23:47:23.701346 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:25.631913 kubelet[2898]: E0114 23:47:25.631855 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:27.631556 kubelet[2898]: E0114 23:47:27.631493 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" 
podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:28.702401 kubelet[2898]: E0114 23:47:28.702354 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:29.631551 kubelet[2898]: E0114 23:47:29.631452 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:31.631620 kubelet[2898]: E0114 23:47:31.631551 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:33.631511 kubelet[2898]: E0114 23:47:33.631394 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:33.703924 kubelet[2898]: E0114 23:47:33.703886 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:35.632031 kubelet[2898]: E0114 23:47:35.631703 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" 
pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:37.631676 kubelet[2898]: E0114 23:47:37.631603 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:38.704859 kubelet[2898]: E0114 23:47:38.704817 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:39.631739 kubelet[2898]: E0114 23:47:39.631686 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:41.631603 kubelet[2898]: E0114 23:47:41.631521 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:43.632174 kubelet[2898]: E0114 23:47:43.630953 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:43.707749 kubelet[2898]: E0114 23:47:43.707713 2898 kubelet.go:3002] "Container runtime network not ready" 
networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:45.632057 kubelet[2898]: E0114 23:47:45.631775 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:47.631855 kubelet[2898]: E0114 23:47:47.631760 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:48.709514 kubelet[2898]: E0114 23:47:48.709467 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:49.631427 kubelet[2898]: E0114 23:47:49.631368 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:51.631583 kubelet[2898]: E0114 23:47:51.631506 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:53.631879 kubelet[2898]: E0114 23:47:53.631819 2898 pod_workers.go:1301] "Error 
syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:53.710384 kubelet[2898]: E0114 23:47:53.710340 2898 kubelet.go:3002] "Container runtime network not ready" networkReady="NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" Jan 14 23:47:55.631707 kubelet[2898]: E0114 23:47:55.631655 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:56.562821 containerd[1695]: time="2026-01-14T23:47:56.562765296Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:47:56.564149 containerd[1695]: time="2026-01-14T23:47:56.564096580Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 14 23:47:56.565201 containerd[1695]: time="2026-01-14T23:47:56.565150423Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:47:56.567369 containerd[1695]: time="2026-01-14T23:47:56.567328589Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:47:56.568146 containerd[1695]: time="2026-01-14T23:47:56.568097712Z" level=info msg="Pulled image 
\"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2m6.843445362s" Jan 14 23:47:56.568146 containerd[1695]: time="2026-01-14T23:47:56.568137912Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 14 23:47:56.581500 containerd[1695]: time="2026-01-14T23:47:56.581421593Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 14 23:47:56.591840 containerd[1695]: time="2026-01-14T23:47:56.590702301Z" level=info msg="Container 0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:47:56.600721 containerd[1695]: time="2026-01-14T23:47:56.600679011Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443\"" Jan 14 23:47:56.601385 containerd[1695]: time="2026-01-14T23:47:56.601363053Z" level=info msg="StartContainer for \"0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443\"" Jan 14 23:47:56.603573 containerd[1695]: time="2026-01-14T23:47:56.603543580Z" level=info msg="connecting to shim 0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443" address="unix:///run/containerd/s/90a1ee0e11147462d1d9eb2e096e972e658978c10625898413a99a44161de7c3" protocol=ttrpc version=3 Jan 14 23:47:56.631502 systemd[1]: Started 
cri-containerd-0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443.scope - libcontainer container 0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443. Jan 14 23:47:56.688000 audit: BPF prog-id=169 op=LOAD Jan 14 23:47:56.688000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.694654 kernel: audit: type=1334 audit(1768434476.688:569): prog-id=169 op=LOAD Jan 14 23:47:56.694722 kernel: audit: type=1300 audit(1768434476.688:569): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.698446 kernel: audit: type=1327 audit(1768434476.688:569): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.698556 kernel: audit: type=1334 audit(1768434476.688:570): prog-id=170 op=LOAD Jan 14 23:47:56.688000 audit: BPF prog-id=170 op=LOAD Jan 14 23:47:56.688000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.702745 kernel: audit: type=1300 audit(1768434476.688:570): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.705975 kernel: audit: type=1327 audit(1768434476.688:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.706079 kernel: audit: type=1334 audit(1768434476.688:571): prog-id=170 op=UNLOAD Jan 14 23:47:56.688000 audit: BPF prog-id=170 op=UNLOAD Jan 14 23:47:56.688000 audit[3714]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.709735 kernel: audit: type=1300 audit(1768434476.688:571): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.688000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.712916 kernel: audit: type=1327 audit(1768434476.688:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.688000 audit: BPF prog-id=169 op=UNLOAD Jan 14 23:47:56.713918 kernel: audit: type=1334 audit(1768434476.688:572): prog-id=169 op=UNLOAD Jan 14 23:47:56.688000 audit[3714]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.688000 audit: BPF prog-id=171 op=LOAD Jan 14 23:47:56.688000 audit[3714]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3450 pid=3714 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:47:56.688000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3062303631353063336166376363643834306336663332333535643839 Jan 14 23:47:56.725486 containerd[1695]: time="2026-01-14T23:47:56.725388432Z" level=info msg="StartContainer for \"0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443\" returns successfully" Jan 14 23:47:57.631358 kubelet[2898]: E0114 23:47:57.631240 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:57.982639 containerd[1695]: time="2026-01-14T23:47:57.982521393Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 14 23:47:57.984450 systemd[1]: cri-containerd-0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443.scope: Deactivated successfully. Jan 14 23:47:57.984774 systemd[1]: cri-containerd-0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443.scope: Consumed 454ms CPU time, 185.4M memory peak, 165.9M written to disk. 
Jan 14 23:47:57.987114 containerd[1695]: time="2026-01-14T23:47:57.987082286Z" level=info msg="received container exit event container_id:\"0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443\" id:\"0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443\" pid:3727 exited_at:{seconds:1768434477 nanos:986772286}" Jan 14 23:47:57.990000 audit: BPF prog-id=171 op=UNLOAD Jan 14 23:47:58.006896 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443-rootfs.mount: Deactivated successfully. Jan 14 23:47:59.636481 systemd[1]: Created slice kubepods-besteffort-pod5c454d6a_8fe3_46dd_a39b_d216b7be481d.slice - libcontainer container kubepods-besteffort-pod5c454d6a_8fe3_46dd_a39b_d216b7be481d.slice. Jan 14 23:47:59.638931 containerd[1695]: time="2026-01-14T23:47:59.638871172Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lqxs,Uid:5c454d6a-8fe3-46dd-a39b-d216b7be481d,Namespace:calico-system,Attempt:0,}" Jan 14 23:47:59.710164 containerd[1695]: time="2026-01-14T23:47:59.710106350Z" level=error msg="Failed to destroy network for sandbox \"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:47:59.711789 systemd[1]: run-netns-cni\x2d8aeb0819\x2d7a7b\x2d8a00\x2d6a3c\x2d85cd098fa3f4.mount: Deactivated successfully. 
Jan 14 23:47:59.715413 containerd[1695]: time="2026-01-14T23:47:59.715371966Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lqxs,Uid:5c454d6a-8fe3-46dd-a39b-d216b7be481d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:47:59.715837 kubelet[2898]: E0114 23:47:59.715765 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:47:59.716108 kubelet[2898]: E0114 23:47:59.715874 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2lqxs" Jan 14 23:47:59.716108 kubelet[2898]: E0114 23:47:59.715895 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-2lqxs" 
Jan 14 23:47:59.716108 kubelet[2898]: E0114 23:47:59.715948 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"608215a6583cffbb40c597fd1bd19a22d3f7e87e6a5a6d48738605df161e51c0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:47:59.961527 containerd[1695]: time="2026-01-14T23:47:59.961406038Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 14 23:48:02.740823 systemd[1]: Created slice kubepods-burstable-pod86940f44_aeec_4f6a_958e_6dee8b716868.slice - libcontainer container kubepods-burstable-pod86940f44_aeec_4f6a_958e_6dee8b716868.slice. Jan 14 23:48:02.750875 systemd[1]: Created slice kubepods-burstable-podf47a1c23_1d14_45a5_9fef_8bb462878104.slice - libcontainer container kubepods-burstable-podf47a1c23_1d14_45a5_9fef_8bb462878104.slice. Jan 14 23:48:02.760799 systemd[1]: Created slice kubepods-besteffort-podfcec49c5_6358_46d9_9922_8a81fb4bafd8.slice - libcontainer container kubepods-besteffort-podfcec49c5_6358_46d9_9922_8a81fb4bafd8.slice. Jan 14 23:48:02.768722 systemd[1]: Created slice kubepods-besteffort-pod3c2935dc_ad54_4dd0_bfa3_577b2efcfa67.slice - libcontainer container kubepods-besteffort-pod3c2935dc_ad54_4dd0_bfa3_577b2efcfa67.slice. 
Jan 14 23:48:02.770558 kubelet[2898]: I0114 23:48:02.770393 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/300b5f0b-ed7c-4a04-a4b8-68a71ea25297-calico-apiserver-certs\") pod \"calico-apiserver-5b767987c5-2glxx\" (UID: \"300b5f0b-ed7c-4a04-a4b8-68a71ea25297\") " pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" Jan 14 23:48:02.770558 kubelet[2898]: I0114 23:48:02.770441 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gb4q4\" (UniqueName: \"kubernetes.io/projected/2d307ca4-cd62-4987-b2dc-ed6b76a2794e-kube-api-access-gb4q4\") pod \"calico-kube-controllers-7cd9b5689c-544p6\" (UID: \"2d307ca4-cd62-4987-b2dc-ed6b76a2794e\") " pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" Jan 14 23:48:02.770558 kubelet[2898]: I0114 23:48:02.770460 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7wqwf\" (UniqueName: \"kubernetes.io/projected/f47a1c23-1d14-45a5-9fef-8bb462878104-kube-api-access-7wqwf\") pod \"coredns-668d6bf9bc-t9xqm\" (UID: \"f47a1c23-1d14-45a5-9fef-8bb462878104\") " pod="kube-system/coredns-668d6bf9bc-t9xqm" Jan 14 23:48:02.770558 kubelet[2898]: I0114 23:48:02.770476 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s9rtk\" (UniqueName: \"kubernetes.io/projected/86940f44-aeec-4f6a-958e-6dee8b716868-kube-api-access-s9rtk\") pod \"coredns-668d6bf9bc-x227k\" (UID: \"86940f44-aeec-4f6a-958e-6dee8b716868\") " pod="kube-system/coredns-668d6bf9bc-x227k" Jan 14 23:48:02.770558 kubelet[2898]: I0114 23:48:02.770493 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/2d307ca4-cd62-4987-b2dc-ed6b76a2794e-tigera-ca-bundle\") pod 
\"calico-kube-controllers-7cd9b5689c-544p6\" (UID: \"2d307ca4-cd62-4987-b2dc-ed6b76a2794e\") " pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" Jan 14 23:48:02.771097 kubelet[2898]: I0114 23:48:02.770508 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f47a1c23-1d14-45a5-9fef-8bb462878104-config-volume\") pod \"coredns-668d6bf9bc-t9xqm\" (UID: \"f47a1c23-1d14-45a5-9fef-8bb462878104\") " pod="kube-system/coredns-668d6bf9bc-t9xqm" Jan 14 23:48:02.771097 kubelet[2898]: I0114 23:48:02.770525 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tfj59\" (UniqueName: \"kubernetes.io/projected/5eca9ff5-ed57-4795-b82c-c2e2b81c8474-kube-api-access-tfj59\") pod \"calico-apiserver-5b767987c5-49kdx\" (UID: \"5eca9ff5-ed57-4795-b82c-c2e2b81c8474\") " pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" Jan 14 23:48:02.771097 kubelet[2898]: I0114 23:48:02.770542 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-h289k\" (UniqueName: \"kubernetes.io/projected/fcec49c5-6358-46d9-9922-8a81fb4bafd8-kube-api-access-h289k\") pod \"goldmane-666569f655-5sxpk\" (UID: \"fcec49c5-6358-46d9-9922-8a81fb4bafd8\") " pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:02.771097 kubelet[2898]: I0114 23:48:02.770558 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-backend-key-pair\") pod \"whisker-6c5899f84b-pb9l8\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " pod="calico-system/whisker-6c5899f84b-pb9l8" Jan 14 23:48:02.771513 kubelet[2898]: I0114 23:48:02.771375 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-ca-bundle\") pod \"whisker-6c5899f84b-pb9l8\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " pod="calico-system/whisker-6c5899f84b-pb9l8" Jan 14 23:48:02.771513 kubelet[2898]: I0114 23:48:02.771405 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/5eca9ff5-ed57-4795-b82c-c2e2b81c8474-calico-apiserver-certs\") pod \"calico-apiserver-5b767987c5-49kdx\" (UID: \"5eca9ff5-ed57-4795-b82c-c2e2b81c8474\") " pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" Jan 14 23:48:02.771513 kubelet[2898]: I0114 23:48:02.771447 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m5dng\" (UniqueName: \"kubernetes.io/projected/300b5f0b-ed7c-4a04-a4b8-68a71ea25297-kube-api-access-m5dng\") pod \"calico-apiserver-5b767987c5-2glxx\" (UID: \"300b5f0b-ed7c-4a04-a4b8-68a71ea25297\") " pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" Jan 14 23:48:02.771513 kubelet[2898]: I0114 23:48:02.771465 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fgb4f\" (UniqueName: \"kubernetes.io/projected/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-kube-api-access-fgb4f\") pod \"whisker-6c5899f84b-pb9l8\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " pod="calico-system/whisker-6c5899f84b-pb9l8" Jan 14 23:48:02.771513 kubelet[2898]: I0114 23:48:02.771486 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/fcec49c5-6358-46d9-9922-8a81fb4bafd8-goldmane-key-pair\") pod \"goldmane-666569f655-5sxpk\" (UID: \"fcec49c5-6358-46d9-9922-8a81fb4bafd8\") " pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:02.771835 kubelet[2898]: I0114 23:48:02.771528 
2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fcec49c5-6358-46d9-9922-8a81fb4bafd8-goldmane-ca-bundle\") pod \"goldmane-666569f655-5sxpk\" (UID: \"fcec49c5-6358-46d9-9922-8a81fb4bafd8\") " pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:02.771835 kubelet[2898]: I0114 23:48:02.771548 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/86940f44-aeec-4f6a-958e-6dee8b716868-config-volume\") pod \"coredns-668d6bf9bc-x227k\" (UID: \"86940f44-aeec-4f6a-958e-6dee8b716868\") " pod="kube-system/coredns-668d6bf9bc-x227k" Jan 14 23:48:02.771835 kubelet[2898]: I0114 23:48:02.771581 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fcec49c5-6358-46d9-9922-8a81fb4bafd8-config\") pod \"goldmane-666569f655-5sxpk\" (UID: \"fcec49c5-6358-46d9-9922-8a81fb4bafd8\") " pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:02.778875 systemd[1]: Created slice kubepods-besteffort-pod300b5f0b_ed7c_4a04_a4b8_68a71ea25297.slice - libcontainer container kubepods-besteffort-pod300b5f0b_ed7c_4a04_a4b8_68a71ea25297.slice. Jan 14 23:48:02.788041 systemd[1]: Created slice kubepods-besteffort-pod5eca9ff5_ed57_4795_b82c_c2e2b81c8474.slice - libcontainer container kubepods-besteffort-pod5eca9ff5_ed57_4795_b82c_c2e2b81c8474.slice. Jan 14 23:48:02.794730 systemd[1]: Created slice kubepods-besteffort-pod2d307ca4_cd62_4987_b2dc_ed6b76a2794e.slice - libcontainer container kubepods-besteffort-pod2d307ca4_cd62_4987_b2dc_ed6b76a2794e.slice. 
Jan 14 23:48:03.046434 containerd[1695]: time="2026-01-14T23:48:03.046389902Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x227k,Uid:86940f44-aeec-4f6a-958e-6dee8b716868,Namespace:kube-system,Attempt:0,}" Jan 14 23:48:03.057294 containerd[1695]: time="2026-01-14T23:48:03.056637213Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9xqm,Uid:f47a1c23-1d14-45a5-9fef-8bb462878104,Namespace:kube-system,Attempt:0,}" Jan 14 23:48:03.069911 containerd[1695]: time="2026-01-14T23:48:03.069778373Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5sxpk,Uid:fcec49c5-6358-46d9-9922-8a81fb4bafd8,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:03.074573 containerd[1695]: time="2026-01-14T23:48:03.074518388Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c5899f84b-pb9l8,Uid:3c2935dc-ad54-4dd0-bfa3-577b2efcfa67,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:03.085949 containerd[1695]: time="2026-01-14T23:48:03.085436381Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-2glxx,Uid:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:48:03.093574 containerd[1695]: time="2026-01-14T23:48:03.093537046Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-49kdx,Uid:5eca9ff5-ed57-4795-b82c-c2e2b81c8474,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:48:03.100570 containerd[1695]: time="2026-01-14T23:48:03.100523827Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd9b5689c-544p6,Uid:2d307ca4-cd62-4987-b2dc-ed6b76a2794e,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:03.125513 containerd[1695]: time="2026-01-14T23:48:03.125457663Z" level=error msg="Failed to destroy network for sandbox \"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\"" error="plugin type=\"calico\" failed (delete): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.127865 containerd[1695]: time="2026-01-14T23:48:03.127814710Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x227k,Uid:86940f44-aeec-4f6a-958e-6dee8b716868,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.128146 kubelet[2898]: E0114 23:48:03.128112 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.128291 kubelet[2898]: E0114 23:48:03.128261 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x227k" Jan 14 23:48:03.128710 kubelet[2898]: E0114 23:48:03.128352 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or 
directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-x227k" Jan 14 23:48:03.128710 kubelet[2898]: E0114 23:48:03.128407 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-x227k_kube-system(86940f44-aeec-4f6a-958e-6dee8b716868)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-x227k_kube-system(86940f44-aeec-4f6a-958e-6dee8b716868)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"fe358daefb075a3fde8a8354862ea19b98fe19add6ac80f42595bf6f84d69d92\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-x227k" podUID="86940f44-aeec-4f6a-958e-6dee8b716868" Jan 14 23:48:03.136306 containerd[1695]: time="2026-01-14T23:48:03.136235896Z" level=error msg="Failed to destroy network for sandbox \"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.139725 containerd[1695]: time="2026-01-14T23:48:03.139652186Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9xqm,Uid:f47a1c23-1d14-45a5-9fef-8bb462878104,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.140293 kubelet[2898]: E0114 23:48:03.140109 2898 log.go:32] "RunPodSandbox from 
runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.140293 kubelet[2898]: E0114 23:48:03.140167 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9xqm" Jan 14 23:48:03.140293 kubelet[2898]: E0114 23:48:03.140189 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-t9xqm" Jan 14 23:48:03.140430 kubelet[2898]: E0114 23:48:03.140234 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-t9xqm_kube-system(f47a1c23-1d14-45a5-9fef-8bb462878104)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-t9xqm_kube-system(f47a1c23-1d14-45a5-9fef-8bb462878104)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1f75e8f999503611cb68fdcbd0a2fd9dca38e73ee605e359c134972ee57d0919\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running 
and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-t9xqm" podUID="f47a1c23-1d14-45a5-9fef-8bb462878104" Jan 14 23:48:03.166551 containerd[1695]: time="2026-01-14T23:48:03.166501549Z" level=error msg="Failed to destroy network for sandbox \"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.170293 containerd[1695]: time="2026-01-14T23:48:03.170014199Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5sxpk,Uid:fcec49c5-6358-46d9-9922-8a81fb4bafd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.170569 kubelet[2898]: E0114 23:48:03.170495 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.170624 kubelet[2898]: E0114 23:48:03.170571 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:03.170624 kubelet[2898]: E0114 23:48:03.170592 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5sxpk" Jan 14 23:48:03.170676 kubelet[2898]: E0114 23:48:03.170629 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f70a05f0f6d8e6e8c63d8bce6a5ba3d8a4b4f5e83fe4a868c3b35c4fb8c75b06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:03.190478 containerd[1695]: time="2026-01-14T23:48:03.190429182Z" level=error msg="Failed to destroy network for sandbox \"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.192493 containerd[1695]: time="2026-01-14T23:48:03.192414068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-49kdx,Uid:5eca9ff5-ed57-4795-b82c-c2e2b81c8474,Namespace:calico-apiserver,Attempt:0,} failed, 
error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.193501 containerd[1695]: time="2026-01-14T23:48:03.192653068Z" level=error msg="Failed to destroy network for sandbox \"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.194683 kubelet[2898]: E0114 23:48:03.193590 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.194683 kubelet[2898]: E0114 23:48:03.193658 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" Jan 14 23:48:03.194683 kubelet[2898]: E0114 23:48:03.193680 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" Jan 14 23:48:03.195287 kubelet[2898]: E0114 23:48:03.193727 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"cab07958a479e05d35711ab21768659fc8f7010c605a0cc6903eab60ab6613ec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:03.195890 containerd[1695]: time="2026-01-14T23:48:03.195829158Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-2glxx,Uid:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.196144 kubelet[2898]: E0114 23:48:03.196028 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the 
calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.196144 kubelet[2898]: E0114 23:48:03.196087 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" Jan 14 23:48:03.196144 kubelet[2898]: E0114 23:48:03.196102 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" Jan 14 23:48:03.196340 kubelet[2898]: E0114 23:48:03.196140 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"4358617405d55cea633191caf844d7838277a31020903dd6e004cb5904db975a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:48:03.201282 containerd[1695]: time="2026-01-14T23:48:03.201227775Z" 
level=error msg="Failed to destroy network for sandbox \"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.202167 containerd[1695]: time="2026-01-14T23:48:03.202131737Z" level=error msg="Failed to destroy network for sandbox \"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.203820 containerd[1695]: time="2026-01-14T23:48:03.203766702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6c5899f84b-pb9l8,Uid:3c2935dc-ad54-4dd0-bfa3-577b2efcfa67,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.204164 kubelet[2898]: E0114 23:48:03.203947 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.204164 kubelet[2898]: E0114 23:48:03.204004 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\": 
plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c5899f84b-pb9l8" Jan 14 23:48:03.204164 kubelet[2898]: E0114 23:48:03.204023 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6c5899f84b-pb9l8" Jan 14 23:48:03.204424 kubelet[2898]: E0114 23:48:03.204060 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6c5899f84b-pb9l8_calico-system(3c2935dc-ad54-4dd0-bfa3-577b2efcfa67)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6c5899f84b-pb9l8_calico-system(3c2935dc-ad54-4dd0-bfa3-577b2efcfa67)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d40452a8e987f122ebd38c1f49cd99cd75bf84fc29b6bfc22d0df8b90d4fb4f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6c5899f84b-pb9l8" podUID="3c2935dc-ad54-4dd0-bfa3-577b2efcfa67" Jan 14 23:48:03.205185 containerd[1695]: time="2026-01-14T23:48:03.205121426Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd9b5689c-544p6,Uid:2d307ca4-cd62-4987-b2dc-ed6b76a2794e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.205741 kubelet[2898]: E0114 23:48:03.205387 2898 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 14 23:48:03.205741 kubelet[2898]: E0114 23:48:03.205438 2898 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" Jan 14 23:48:03.205741 kubelet[2898]: E0114 23:48:03.205460 2898 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" Jan 14 23:48:03.205880 kubelet[2898]: E0114 23:48:03.205490 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod 
\\\"calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"62f7f01de95cd170851e1c3f5ef2283ed30c8cb256ad48bf474e861b3797f34a\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:48:07.325373 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1691960482.mount: Deactivated successfully. Jan 14 23:48:07.348085 containerd[1695]: time="2026-01-14T23:48:07.348030922Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:07.351178 containerd[1695]: time="2026-01-14T23:48:07.350842411Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 14 23:48:07.353672 containerd[1695]: time="2026-01-14T23:48:07.353613499Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:07.355502 containerd[1695]: time="2026-01-14T23:48:07.355472945Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 14 23:48:07.356291 containerd[1695]: time="2026-01-14T23:48:07.356014027Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest 
\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 7.394567549s" Jan 14 23:48:07.356291 containerd[1695]: time="2026-01-14T23:48:07.356040027Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 14 23:48:07.364179 containerd[1695]: time="2026-01-14T23:48:07.364132131Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 14 23:48:07.379647 containerd[1695]: time="2026-01-14T23:48:07.379437018Z" level=info msg="Container 379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:48:07.387693 containerd[1695]: time="2026-01-14T23:48:07.387655923Z" level=info msg="CreateContainer within sandbox \"63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8\"" Jan 14 23:48:07.389313 containerd[1695]: time="2026-01-14T23:48:07.389070568Z" level=info msg="StartContainer for \"379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8\"" Jan 14 23:48:07.391833 containerd[1695]: time="2026-01-14T23:48:07.391807936Z" level=info msg="connecting to shim 379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8" address="unix:///run/containerd/s/90a1ee0e11147462d1d9eb2e096e972e658978c10625898413a99a44161de7c3" protocol=ttrpc version=3 Jan 14 23:48:07.415525 systemd[1]: Started cri-containerd-379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8.scope - libcontainer container 379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8. 
Jan 14 23:48:07.484000 audit: BPF prog-id=172 op=LOAD Jan 14 23:48:07.486441 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 14 23:48:07.486489 kernel: audit: type=1334 audit(1768434487.484:575): prog-id=172 op=LOAD Jan 14 23:48:07.484000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.490219 kernel: audit: type=1300 audit(1768434487.484:575): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.490455 kernel: audit: type=1327 audit(1768434487.484:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.484000 audit: BPF prog-id=173 op=LOAD Jan 14 23:48:07.484000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.497089 kernel: audit: type=1334 
audit(1768434487.484:576): prog-id=173 op=LOAD Jan 14 23:48:07.497152 kernel: audit: type=1300 audit(1768434487.484:576): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.500181 kernel: audit: type=1327 audit(1768434487.484:576): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.500346 kernel: audit: type=1334 audit(1768434487.485:577): prog-id=173 op=UNLOAD Jan 14 23:48:07.485000 audit: BPF prog-id=173 op=UNLOAD Jan 14 23:48:07.485000 audit[4033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.504139 kernel: audit: type=1300 audit(1768434487.485:577): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.504194 kernel: audit: type=1327 audit(1768434487.485:577): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.485000 audit: BPF prog-id=172 op=UNLOAD Jan 14 23:48:07.507877 kernel: audit: type=1334 audit(1768434487.485:578): prog-id=172 op=UNLOAD Jan 14 23:48:07.485000 audit[4033]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.485000 audit: BPF prog-id=174 op=LOAD Jan 14 23:48:07.485000 audit[4033]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3450 pid=4033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:07.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3337393836356635396431633664343561343765666164376664323738 Jan 14 23:48:07.525304 containerd[1695]: time="2026-01-14T23:48:07.525182383Z" level=info msg="StartContainer for \"379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8\" returns successfully" Jan 14 23:48:07.655182 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 14 23:48:07.655487 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 14 23:48:07.807896 kubelet[2898]: I0114 23:48:07.807860 2898 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-ca-bundle\") pod \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " Jan 14 23:48:07.807896 kubelet[2898]: I0114 23:48:07.807907 2898 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fgb4f\" (UniqueName: \"kubernetes.io/projected/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-kube-api-access-fgb4f\") pod \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " Jan 14 23:48:07.808263 kubelet[2898]: I0114 23:48:07.807952 2898 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-backend-key-pair\") pod \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\" (UID: \"3c2935dc-ad54-4dd0-bfa3-577b2efcfa67\") " Jan 14 23:48:07.808751 kubelet[2898]: I0114 23:48:07.808654 2898 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67" (UID: "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 14 23:48:07.810404 kubelet[2898]: I0114 23:48:07.810371 2898 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67" (UID: "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 14 23:48:07.810842 kubelet[2898]: I0114 23:48:07.810818 2898 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-kube-api-access-fgb4f" (OuterVolumeSpecName: "kube-api-access-fgb4f") pod "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67" (UID: "3c2935dc-ad54-4dd0-bfa3-577b2efcfa67"). InnerVolumeSpecName "kube-api-access-fgb4f". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 14 23:48:07.908207 kubelet[2898]: I0114 23:48:07.908169 2898 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-ca-bundle\") on node \"ci-4515-1-0-n-1d3be4f164\" DevicePath \"\"" Jan 14 23:48:07.908207 kubelet[2898]: I0114 23:48:07.908205 2898 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fgb4f\" (UniqueName: \"kubernetes.io/projected/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-kube-api-access-fgb4f\") on node \"ci-4515-1-0-n-1d3be4f164\" DevicePath \"\"" Jan 14 23:48:07.908207 kubelet[2898]: I0114 23:48:07.908216 2898 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67-whisker-backend-key-pair\") on node \"ci-4515-1-0-n-1d3be4f164\" DevicePath \"\"" Jan 14 23:48:07.987809 systemd[1]: Removed slice kubepods-besteffort-pod3c2935dc_ad54_4dd0_bfa3_577b2efcfa67.slice - libcontainer container kubepods-besteffort-pod3c2935dc_ad54_4dd0_bfa3_577b2efcfa67.slice. Jan 14 23:48:08.007129 kubelet[2898]: I0114 23:48:08.006458 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-pcznb" podStartSLOduration=1.075466845 podStartE2EDuration="2m26.006437173s" podCreationTimestamp="2026-01-14 23:45:42 +0000 UTC" firstStartedPulling="2026-01-14 23:45:42.425993101 +0000 UTC m=+23.875531655" lastFinishedPulling="2026-01-14 23:48:07.356963469 +0000 UTC m=+168.806501983" observedRunningTime="2026-01-14 23:48:08.006089052 +0000 UTC m=+169.455627566" watchObservedRunningTime="2026-01-14 23:48:08.006437173 +0000 UTC m=+169.455975727" Jan 14 23:48:08.070634 systemd[1]: Created slice kubepods-besteffort-pod63f0b6ec_9977_4e0c_b6a6_80408e82ee47.slice - libcontainer container kubepods-besteffort-pod63f0b6ec_9977_4e0c_b6a6_80408e82ee47.slice. 
Jan 14 23:48:08.110095 kubelet[2898]: I0114 23:48:08.109458 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/63f0b6ec-9977-4e0c-b6a6-80408e82ee47-whisker-ca-bundle\") pod \"whisker-7f94899ccb-pnwbr\" (UID: \"63f0b6ec-9977-4e0c-b6a6-80408e82ee47\") " pod="calico-system/whisker-7f94899ccb-pnwbr" Jan 14 23:48:08.110395 kubelet[2898]: I0114 23:48:08.110327 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-x7cql\" (UniqueName: \"kubernetes.io/projected/63f0b6ec-9977-4e0c-b6a6-80408e82ee47-kube-api-access-x7cql\") pod \"whisker-7f94899ccb-pnwbr\" (UID: \"63f0b6ec-9977-4e0c-b6a6-80408e82ee47\") " pod="calico-system/whisker-7f94899ccb-pnwbr" Jan 14 23:48:08.110535 kubelet[2898]: I0114 23:48:08.110468 2898 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/63f0b6ec-9977-4e0c-b6a6-80408e82ee47-whisker-backend-key-pair\") pod \"whisker-7f94899ccb-pnwbr\" (UID: \"63f0b6ec-9977-4e0c-b6a6-80408e82ee47\") " pod="calico-system/whisker-7f94899ccb-pnwbr" Jan 14 23:48:08.328079 systemd[1]: var-lib-kubelet-pods-3c2935dc\x2dad54\x2d4dd0\x2dbfa3\x2d577b2efcfa67-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfgb4f.mount: Deactivated successfully. Jan 14 23:48:08.328162 systemd[1]: var-lib-kubelet-pods-3c2935dc\x2dad54\x2d4dd0\x2dbfa3\x2d577b2efcfa67-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 14 23:48:08.377498 containerd[1695]: time="2026-01-14T23:48:08.377443947Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f94899ccb-pnwbr,Uid:63f0b6ec-9977-4e0c-b6a6-80408e82ee47,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:08.510112 systemd-networkd[1602]: cali1722fd7de71: Link UP Jan 14 23:48:08.510259 systemd-networkd[1602]: cali1722fd7de71: Gained carrier Jan 14 23:48:08.522935 containerd[1695]: 2026-01-14 23:48:08.397 [INFO][4126] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 14 23:48:08.522935 containerd[1695]: 2026-01-14 23:48:08.415 [INFO][4126] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0 whisker-7f94899ccb- calico-system 63f0b6ec-9977-4e0c-b6a6-80408e82ee47 1128 0 2026-01-14 23:48:08 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:7f94899ccb projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 whisker-7f94899ccb-pnwbr eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali1722fd7de71 [] [] }} ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-" Jan 14 23:48:08.522935 containerd[1695]: 2026-01-14 23:48:08.415 [INFO][4126] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.522935 containerd[1695]: 2026-01-14 23:48:08.462 [INFO][4141] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" HandleID="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Workload="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.462 [INFO][4141] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" HandleID="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Workload="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000322b50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"whisker-7f94899ccb-pnwbr", "timestamp":"2026-01-14 23:48:08.462081165 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.462 [INFO][4141] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.462 [INFO][4141] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.462 [INFO][4141] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.472 [INFO][4141] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.478 [INFO][4141] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.483 [INFO][4141] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.485 [INFO][4141] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523145 containerd[1695]: 2026-01-14 23:48:08.487 [INFO][4141] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.487 [INFO][4141] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.489 [INFO][4141] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.493 [INFO][4141] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.499 [INFO][4141] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.129/26] block=192.168.21.128/26 handle="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.499 [INFO][4141] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.129/26] handle="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.499 [INFO][4141] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:08.523348 containerd[1695]: 2026-01-14 23:48:08.499 [INFO][4141] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.129/26] IPv6=[] ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" HandleID="k8s-pod-network.1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Workload="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.523489 containerd[1695]: 2026-01-14 23:48:08.502 [INFO][4126] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0", GenerateName:"whisker-7f94899ccb-", Namespace:"calico-system", SelfLink:"", UID:"63f0b6ec-9977-4e0c-b6a6-80408e82ee47", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f94899ccb", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"whisker-7f94899ccb-pnwbr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1722fd7de71", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:08.523489 containerd[1695]: 2026-01-14 23:48:08.502 [INFO][4126] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.129/32] ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.523560 containerd[1695]: 2026-01-14 23:48:08.502 [INFO][4126] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1722fd7de71 ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.523560 containerd[1695]: 2026-01-14 23:48:08.509 [INFO][4126] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.523599 containerd[1695]: 2026-01-14 23:48:08.509 [INFO][4126] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0", GenerateName:"whisker-7f94899ccb-", Namespace:"calico-system", SelfLink:"", UID:"63f0b6ec-9977-4e0c-b6a6-80408e82ee47", ResourceVersion:"1128", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 48, 8, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"7f94899ccb", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb", Pod:"whisker-7f94899ccb-pnwbr", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.21.129/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali1722fd7de71", MAC:"7a:7f:59:67:5c:92", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:08.523677 containerd[1695]: 2026-01-14 23:48:08.520 [INFO][4126] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" Namespace="calico-system" Pod="whisker-7f94899ccb-pnwbr" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-whisker--7f94899ccb--pnwbr-eth0" Jan 14 23:48:08.541046 containerd[1695]: time="2026-01-14T23:48:08.540950446Z" level=info msg="connecting to shim 1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb" address="unix:///run/containerd/s/78d6ad00c1f42894744b4ba3838c70a4259b2d8637f141d536c2b091d8a60b64" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:08.570631 systemd[1]: Started cri-containerd-1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb.scope - libcontainer container 1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb. Jan 14 23:48:08.579000 audit: BPF prog-id=175 op=LOAD Jan 14 23:48:08.580000 audit: BPF prog-id=176 op=LOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=176 op=UNLOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=177 op=LOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=178 op=LOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=178 op=UNLOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=177 op=UNLOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.580000 audit: BPF prog-id=179 op=LOAD Jan 14 23:48:08.580000 audit[4176]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4165 pid=4176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:08.580000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3166666164646234326530663061353536626663356439623863383262 Jan 14 23:48:08.602607 containerd[1695]: time="2026-01-14T23:48:08.602561875Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-7f94899ccb-pnwbr,Uid:63f0b6ec-9977-4e0c-b6a6-80408e82ee47,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb\"" Jan 14 23:48:08.604252 containerd[1695]: time="2026-01-14T23:48:08.604226720Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:48:08.636054 kubelet[2898]: I0114 23:48:08.635975 2898 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3c2935dc-ad54-4dd0-bfa3-577b2efcfa67" path="/var/lib/kubelet/pods/3c2935dc-ad54-4dd0-bfa3-577b2efcfa67/volumes" Jan 14 23:48:08.929502 containerd[1695]: time="2026-01-14T23:48:08.929394553Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:08.931090 containerd[1695]: time="2026-01-14T23:48:08.931030518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:48:08.931196 containerd[1695]: time="2026-01-14T23:48:08.931096398Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:08.931322 kubelet[2898]: E0114 23:48:08.931285 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:08.931577 kubelet[2898]: E0114 23:48:08.931338 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:08.931606 kubelet[2898]: E0114 23:48:08.931546 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:08.933729 containerd[1695]: time="2026-01-14T23:48:08.933698566Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:48:09.168000 audit: BPF prog-id=180 op=LOAD Jan 14 
23:48:09.168000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1cf8b68 a2=98 a3=fffff1cf8b58 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.168000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.169000 audit: BPF prog-id=180 op=UNLOAD Jan 14 23:48:09.169000 audit[4359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff1cf8b38 a3=0 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.169000 audit: BPF prog-id=181 op=LOAD Jan 14 23:48:09.169000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1cf8a18 a2=74 a3=95 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.169000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.169000 audit: BPF prog-id=181 op=UNLOAD Jan 14 23:48:09.169000 audit[4359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.169000 audit: BPF prog-id=182 op=LOAD Jan 14 23:48:09.169000 audit[4359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff1cf8a48 a2=40 a3=fffff1cf8a78 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.169000 audit: BPF prog-id=182 op=UNLOAD Jan 14 23:48:09.169000 audit[4359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff1cf8a78 items=0 ppid=4222 pid=4359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.169000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 14 23:48:09.171000 audit: BPF prog-id=183 op=LOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc67f9278 a2=98 a3=ffffc67f9268 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.171000 audit: BPF prog-id=183 op=UNLOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc67f9248 a3=0 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.171000 audit: BPF prog-id=184 op=LOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67f8f08 a2=74 a3=95 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.171000 audit: BPF prog-id=184 op=UNLOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4222 
pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.171000 audit: BPF prog-id=185 op=LOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67f8f68 a2=94 a3=2 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.171000 audit: BPF prog-id=185 op=UNLOAD Jan 14 23:48:09.171000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.171000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.258460 containerd[1695]: time="2026-01-14T23:48:09.258393198Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:09.260956 containerd[1695]: time="2026-01-14T23:48:09.260697965Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:48:09.260956 containerd[1695]: time="2026-01-14T23:48:09.260794925Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:09.261156 
kubelet[2898]: E0114 23:48:09.261079 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:09.261235 kubelet[2898]: E0114 23:48:09.261184 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:09.261472 kubelet[2898]: E0114 23:48:09.261423 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,Readiness
Probe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:09.262809 kubelet[2898]: E0114 23:48:09.262739 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:09.272000 audit: BPF prog-id=186 op=LOAD Jan 14 23:48:09.272000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc67f8f28 a2=40 a3=ffffc67f8f58 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.272000 audit: BPF prog-id=186 op=UNLOAD Jan 14 23:48:09.272000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc67f8f58 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.272000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=187 op=LOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67f8f38 a2=94 a3=4 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=187 op=UNLOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=188 op=LOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc67f8d78 a2=94 a3=5 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=188 op=UNLOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=189 op=LOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67f8fa8 a2=94 a3=6 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=189 op=UNLOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.282000 audit: BPF prog-id=190 op=LOAD Jan 14 23:48:09.282000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc67f8778 a2=94 a3=83 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 14 23:48:09.282000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.283000 audit: BPF prog-id=191 op=LOAD Jan 14 23:48:09.283000 audit[4360]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc67f8538 a2=94 a3=2 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.283000 audit: BPF prog-id=191 op=UNLOAD Jan 14 23:48:09.283000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.283000 audit: BPF prog-id=190 op=UNLOAD Jan 14 23:48:09.283000 audit[4360]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=9590620 a3=9583b00 items=0 ppid=4222 pid=4360 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.283000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 14 23:48:09.292000 audit: BPF prog-id=192 op=LOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfadde98 a2=98 a3=ffffdfadde88 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.292000 audit: BPF prog-id=192 op=UNLOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdfadde68 a3=0 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.292000 audit: BPF prog-id=193 op=LOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfaddd48 a2=74 a3=95 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.292000 audit: BPF prog-id=193 op=UNLOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.292000 audit: BPF prog-id=194 op=LOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfaddd78 a2=40 a3=ffffdfaddda8 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.292000 audit: BPF prog-id=194 op=UNLOAD Jan 14 23:48:09.292000 audit[4363]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdfaddda8 items=0 ppid=4222 pid=4363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.292000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 14 23:48:09.356151 systemd-networkd[1602]: vxlan.calico: Link UP Jan 14 23:48:09.356165 systemd-networkd[1602]: vxlan.calico: Gained carrier Jan 14 23:48:09.359000 audit: BPF prog-id=195 op=LOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8785bb8 a2=98 a3=ffffe8785ba8 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=195 op=UNLOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe8785b88 a3=0 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=196 op=LOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe8785898 a2=74 a3=95 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=196 op=UNLOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 
a1=57156c a2=74 a3=95 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=197 op=LOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe87858f8 a2=94 a3=2 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=197 op=UNLOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=198 op=LOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe8785778 a2=40 a3=ffffe87857a8 items=0 ppid=4222 pid=4387 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=198 op=UNLOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffe87857a8 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=199 op=LOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe87858c8 a2=94 a3=b7 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.359000 audit: BPF prog-id=199 op=UNLOAD Jan 14 23:48:09.359000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.359000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.361000 audit: BPF prog-id=200 op=LOAD Jan 14 23:48:09.361000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe8784f78 a2=94 a3=2 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.361000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.361000 audit: BPF prog-id=200 op=UNLOAD Jan 14 23:48:09.361000 audit[4387]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.361000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.361000 audit: BPF prog-id=201 op=LOAD Jan 14 23:48:09.361000 audit[4387]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe8785108 a2=94 a3=30 items=0 ppid=4222 pid=4387 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.361000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 14 23:48:09.365000 audit: BPF prog-id=202 op=LOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe1390df8 a2=98 a3=ffffe1390de8 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.365000 audit: BPF prog-id=202 op=UNLOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe1390dc8 a3=0 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.365000 audit: BPF prog-id=203 op=LOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1390a88 a2=74 a3=95 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.365000 audit: BPF prog-id=203 op=UNLOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.365000 audit: BPF prog-id=204 op=LOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1390ae8 a2=94 a3=2 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.365000 audit: BPF prog-id=204 op=UNLOAD Jan 14 23:48:09.365000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.365000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.469000 audit: BPF prog-id=205 op=LOAD Jan 14 23:48:09.469000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe1390aa8 a2=40 a3=ffffe1390ad8 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.469000 audit: BPF prog-id=205 op=UNLOAD Jan 14 23:48:09.469000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe1390ad8 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.469000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=206 op=LOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe1390ab8 a2=94 a3=4 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=206 op=UNLOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=207 op=LOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe13908f8 a2=94 a3=5 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=207 op=UNLOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=208 op=LOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe1390b28 a2=94 a3=6 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=208 op=UNLOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.480000 audit: BPF prog-id=209 op=LOAD Jan 14 23:48:09.480000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe13902f8 a2=94 a3=83 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.480000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.481000 audit: BPF prog-id=210 op=LOAD Jan 14 23:48:09.481000 audit[4393]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe13900b8 a2=94 a3=2 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.481000 audit: BPF prog-id=210 op=UNLOAD Jan 14 23:48:09.481000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.481000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.481000 audit: BPF prog-id=209 op=UNLOAD Jan 14 23:48:09.481000 audit[4393]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=1af3e620 a3=1af31b00 items=0 ppid=4222 pid=4393 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.481000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 14 23:48:09.489000 audit: BPF prog-id=201 op=UNLOAD Jan 14 23:48:09.489000 audit[4222]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000e5e880 a2=0 a3=0 items=0 ppid=4206 pid=4222 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.489000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 14 23:48:09.541000 audit[4418]: NETFILTER_CFG table=nat:121 family=2 entries=15 op=nft_register_chain pid=4418 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:09.541000 audit[4418]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffcd9ca630 a2=0 a3=ffffbd880fa8 items=0 ppid=4222 pid=4418 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.541000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:09.543000 audit[4420]: NETFILTER_CFG table=mangle:122 family=2 entries=16 op=nft_register_chain pid=4420 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:09.543000 audit[4420]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffec63b590 a2=0 a3=ffff8f2e7fa8 items=0 ppid=4222 pid=4420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.543000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:09.551000 audit[4419]: NETFILTER_CFG table=raw:123 family=2 entries=21 op=nft_register_chain pid=4419 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:09.551000 audit[4419]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffcfb1af30 a2=0 a3=ffffa973bfa8 items=0 ppid=4222 pid=4419 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.551000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:09.552000 audit[4422]: NETFILTER_CFG table=filter:124 family=2 entries=94 op=nft_register_chain pid=4422 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:09.552000 audit[4422]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe7a717a0 a2=0 a3=ffff9e753fa8 items=0 ppid=4222 pid=4422 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:09.552000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:09.990112 kubelet[2898]: E0114 23:48:09.989967 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:10.011000 audit[4433]: NETFILTER_CFG table=filter:125 family=2 entries=20 op=nft_register_rule pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:10.011000 audit[4433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc886aa20 a2=0 a3=1 items=0 ppid=3023 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:10.011000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:10.021000 audit[4433]: NETFILTER_CFG table=nat:126 family=2 entries=14 op=nft_register_rule pid=4433 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:10.021000 audit[4433]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc886aa20 a2=0 a3=1 items=0 ppid=3023 pid=4433 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:10.021000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:10.429462 
systemd-networkd[1602]: cali1722fd7de71: Gained IPv6LL Jan 14 23:48:11.196403 systemd-networkd[1602]: vxlan.calico: Gained IPv6LL Jan 14 23:48:12.633448 containerd[1695]: time="2026-01-14T23:48:12.633389508Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lqxs,Uid:5c454d6a-8fe3-46dd-a39b-d216b7be481d,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:12.732121 systemd-networkd[1602]: cali4d25f1fb808: Link UP Jan 14 23:48:12.732992 systemd-networkd[1602]: cali4d25f1fb808: Gained carrier Jan 14 23:48:12.747226 containerd[1695]: 2026-01-14 23:48:12.670 [INFO][4437] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0 csi-node-driver- calico-system 5c454d6a-8fe3-46dd-a39b-d216b7be481d 702 0 2026-01-14 23:45:42 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 csi-node-driver-2lqxs eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali4d25f1fb808 [] [] }} ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-" Jan 14 23:48:12.747226 containerd[1695]: 2026-01-14 23:48:12.670 [INFO][4437] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747226 containerd[1695]: 2026-01-14 23:48:12.692 [INFO][4452] ipam/ipam_plugin.go 227: Calico CNI 
IPAM request count IPv4=1 IPv6=0 ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" HandleID="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.692 [INFO][4452] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" HandleID="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002dcfc0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"csi-node-driver-2lqxs", "timestamp":"2026-01-14 23:48:12.692739209 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.692 [INFO][4452] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.692 [INFO][4452] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.692 [INFO][4452] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.701 [INFO][4452] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.706 [INFO][4452] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.711 [INFO][4452] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.713 [INFO][4452] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747443 containerd[1695]: 2026-01-14 23:48:12.715 [INFO][4452] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.715 [INFO][4452] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.716 [INFO][4452] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.721 [INFO][4452] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.727 [INFO][4452] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.130/26] block=192.168.21.128/26 handle="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.727 [INFO][4452] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.130/26] handle="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.728 [INFO][4452] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:12.747619 containerd[1695]: 2026-01-14 23:48:12.728 [INFO][4452] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.130/26] IPv6=[] ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" HandleID="k8s-pod-network.a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747744 containerd[1695]: 2026-01-14 23:48:12.729 [INFO][4437] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c454d6a-8fe3-46dd-a39b-d216b7be481d", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"csi-node-driver-2lqxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d25f1fb808", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:12.747797 containerd[1695]: 2026-01-14 23:48:12.730 [INFO][4437] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.130/32] ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747797 containerd[1695]: 2026-01-14 23:48:12.730 [INFO][4437] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali4d25f1fb808 ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747797 containerd[1695]: 2026-01-14 23:48:12.732 [INFO][4437] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.747854 
containerd[1695]: 2026-01-14 23:48:12.733 [INFO][4437] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"5c454d6a-8fe3-46dd-a39b-d216b7be481d", ResourceVersion:"702", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc", Pod:"csi-node-driver-2lqxs", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.21.130/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali4d25f1fb808", MAC:"ca:d8:8f:c8:63:39", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:12.747910 containerd[1695]: 
2026-01-14 23:48:12.744 [INFO][4437] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" Namespace="calico-system" Pod="csi-node-driver-2lqxs" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-csi--node--driver--2lqxs-eth0" Jan 14 23:48:12.757000 audit[4469]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4469 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:12.759435 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 14 23:48:12.759485 kernel: audit: type=1325 audit(1768434492.757:656): table=filter:127 family=2 entries=36 op=nft_register_chain pid=4469 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:12.757000 audit[4469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffffddb0470 a2=0 a3=ffffb50d0fa8 items=0 ppid=4222 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.765035 kernel: audit: type=1300 audit(1768434492.757:656): arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=fffffddb0470 a2=0 a3=ffffb50d0fa8 items=0 ppid=4222 pid=4469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.757000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:12.769291 kernel: audit: type=1327 audit(1768434492.757:656): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:12.769865 containerd[1695]: 
time="2026-01-14T23:48:12.769825165Z" level=info msg="connecting to shim a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc" address="unix:///run/containerd/s/5278d2ed12a114de07bee3ff8c7a2a0f899d9bd3ca5a2a715a6cf2ae1a352466" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:12.796530 systemd[1]: Started cri-containerd-a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc.scope - libcontainer container a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc. Jan 14 23:48:12.804000 audit: BPF prog-id=211 op=LOAD Jan 14 23:48:12.805000 audit: BPF prog-id=212 op=LOAD Jan 14 23:48:12.807525 kernel: audit: type=1334 audit(1768434492.804:657): prog-id=211 op=LOAD Jan 14 23:48:12.807565 kernel: audit: type=1334 audit(1768434492.805:658): prog-id=212 op=LOAD Jan 14 23:48:12.805000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.811160 kernel: audit: type=1300 audit(1768434492.805:658): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.805000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.814308 kernel: audit: type=1327 audit(1768434492.805:658): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.814404 kernel: audit: type=1334 audit(1768434492.806:659): prog-id=212 op=UNLOAD Jan 14 23:48:12.806000 audit: BPF prog-id=212 op=UNLOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.817970 kernel: audit: type=1300 audit(1768434492.806:659): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.821092 kernel: audit: type=1327 audit(1768434492.806:659): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.806000 audit: BPF prog-id=213 op=LOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.806000 audit: BPF prog-id=214 op=LOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.806000 audit: BPF prog-id=214 op=UNLOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.806000 audit: BPF prog-id=213 op=UNLOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.806000 audit: BPF prog-id=215 op=LOAD Jan 14 23:48:12.806000 audit[4489]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4479 pid=4489 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:12.806000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6135376132626439363532353137386330343961323332373732623037 Jan 14 23:48:12.837093 containerd[1695]: time="2026-01-14T23:48:12.837037810Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-2lqxs,Uid:5c454d6a-8fe3-46dd-a39b-d216b7be481d,Namespace:calico-system,Attempt:0,} returns sandbox id \"a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc\"" Jan 14 23:48:12.841427 containerd[1695]: time="2026-01-14T23:48:12.841133622Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:48:13.169764 containerd[1695]: time="2026-01-14T23:48:13.169672426Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:13.171063 containerd[1695]: time="2026-01-14T23:48:13.171027430Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:48:13.171126 containerd[1695]: time="2026-01-14T23:48:13.171059230Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:13.171247 kubelet[2898]: E0114 23:48:13.171213 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:13.171597 kubelet[2898]: E0114 23:48:13.171257 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:13.171597 kubelet[2898]: E0114 23:48:13.171386 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 23:48:13.173422 containerd[1695]: time="2026-01-14T23:48:13.173328037Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:48:13.513382 containerd[1695]: time="2026-01-14T23:48:13.513320796Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:13.515320 containerd[1695]: time="2026-01-14T23:48:13.515199042Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:48:13.515428 containerd[1695]: time="2026-01-14T23:48:13.515288442Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:13.515839 kubelet[2898]: E0114 23:48:13.515621 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:13.515839 kubelet[2898]: E0114 23:48:13.515665 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:13.515839 kubelet[2898]: E0114 23:48:13.515771 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:13.516982 kubelet[2898]: E0114 23:48:13.516943 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:48:13.632337 containerd[1695]: time="2026-01-14T23:48:13.632296559Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-49kdx,Uid:5eca9ff5-ed57-4795-b82c-c2e2b81c8474,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:48:13.729379 systemd-networkd[1602]: cali286487d9b98: Link UP Jan 14 23:48:13.729972 systemd-networkd[1602]: cali286487d9b98: Gained carrier Jan 14 23:48:13.744244 containerd[1695]: 2026-01-14 23:48:13.668 [INFO][4515] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0 calico-apiserver-5b767987c5- calico-apiserver 5eca9ff5-ed57-4795-b82c-c2e2b81c8474 1072 0 2026-01-14 23:45:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b767987c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 calico-apiserver-5b767987c5-49kdx eth0 calico-apiserver [] [] 
[kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali286487d9b98 [] [] }} ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-" Jan 14 23:48:13.744244 containerd[1695]: 2026-01-14 23:48:13.668 [INFO][4515] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.744244 containerd[1695]: 2026-01-14 23:48:13.689 [INFO][4530] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" HandleID="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.689 [INFO][4530] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" HandleID="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40001375e0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"calico-apiserver-5b767987c5-49kdx", "timestamp":"2026-01-14 23:48:13.689566134 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 
14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.689 [INFO][4530] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.689 [INFO][4530] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.689 [INFO][4530] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.699 [INFO][4530] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.704 [INFO][4530] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.708 [INFO][4530] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.710 [INFO][4530] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.744895 containerd[1695]: 2026-01-14 23:48:13.712 [INFO][4530] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.712 [INFO][4530] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.714 [INFO][4530] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.719 [INFO][4530] ipam/ipam.go 1246: Writing block 
in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.725 [INFO][4530] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.131/26] block=192.168.21.128/26 handle="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.725 [INFO][4530] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.131/26] handle="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.725 [INFO][4530] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:13.745228 containerd[1695]: 2026-01-14 23:48:13.725 [INFO][4530] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.131/26] IPv6=[] ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" HandleID="k8s-pod-network.2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.745625 containerd[1695]: 2026-01-14 23:48:13.727 [INFO][4515] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0", GenerateName:"calico-apiserver-5b767987c5-", Namespace:"calico-apiserver", SelfLink:"", 
UID:"5eca9ff5-ed57-4795-b82c-c2e2b81c8474", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b767987c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"calico-apiserver-5b767987c5-49kdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali286487d9b98", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:13.745718 containerd[1695]: 2026-01-14 23:48:13.727 [INFO][4515] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.131/32] ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.745718 containerd[1695]: 2026-01-14 23:48:13.727 [INFO][4515] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali286487d9b98 ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" 
WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.745718 containerd[1695]: 2026-01-14 23:48:13.729 [INFO][4515] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.745812 containerd[1695]: 2026-01-14 23:48:13.729 [INFO][4515] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0", GenerateName:"calico-apiserver-5b767987c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"5eca9ff5-ed57-4795-b82c-c2e2b81c8474", ResourceVersion:"1072", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b767987c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", 
ContainerID:"2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d", Pod:"calico-apiserver-5b767987c5-49kdx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.131/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali286487d9b98", MAC:"2e:c4:f2:33:dc:2c", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:13.745961 containerd[1695]: 2026-01-14 23:48:13.742 [INFO][4515] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-49kdx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--49kdx-eth0" Jan 14 23:48:13.759000 audit[4547]: NETFILTER_CFG table=filter:128 family=2 entries=54 op=nft_register_chain pid=4547 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:13.759000 audit[4547]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29396 a0=3 a1=ffffcdb6d4c0 a2=0 a3=ffff849c5fa8 items=0 ppid=4222 pid=4547 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.759000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:13.770029 containerd[1695]: time="2026-01-14T23:48:13.769928020Z" level=info msg="connecting to shim 2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d" address="unix:///run/containerd/s/ce6d83d384281ff0b0633a14e0fc5e28f7ef451ddb38667102c6e0c35efb2ddd" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:13.790714 
systemd[1]: Started cri-containerd-2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d.scope - libcontainer container 2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d. Jan 14 23:48:13.799000 audit: BPF prog-id=216 op=LOAD Jan 14 23:48:13.799000 audit: BPF prog-id=217 op=LOAD Jan 14 23:48:13.799000 audit[4568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.799000 audit: BPF prog-id=217 op=UNLOAD Jan 14 23:48:13.799000 audit[4568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.799000 audit: BPF prog-id=218 op=LOAD Jan 14 23:48:13.799000 audit[4568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:48:13.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.800000 audit: BPF prog-id=219 op=LOAD Jan 14 23:48:13.800000 audit[4568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.800000 audit: BPF prog-id=219 op=UNLOAD Jan 14 23:48:13.800000 audit[4568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.800000 audit: BPF prog-id=218 op=UNLOAD Jan 14 23:48:13.800000 audit[4568]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.800000 audit: BPF prog-id=220 op=LOAD Jan 14 23:48:13.800000 audit[4568]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4557 pid=4568 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:13.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261386637666231643262363134623865336364343634373237393565 Jan 14 23:48:13.821830 containerd[1695]: time="2026-01-14T23:48:13.821791618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-49kdx,Uid:5eca9ff5-ed57-4795-b82c-c2e2b81c8474,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d\"" Jan 14 23:48:13.823183 containerd[1695]: time="2026-01-14T23:48:13.823150022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:13.999095 kubelet[2898]: E0114 23:48:13.999009 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to 
\"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:48:14.166640 containerd[1695]: time="2026-01-14T23:48:14.166411751Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:14.168908 containerd[1695]: time="2026-01-14T23:48:14.168860438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:14.169063 containerd[1695]: time="2026-01-14T23:48:14.168891798Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:14.169292 kubelet[2898]: E0114 23:48:14.169224 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:14.169441 kubelet[2898]: E0114 23:48:14.169377 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:14.169663 kubelet[2898]: E0114 23:48:14.169602 2898 kuberuntime_manager.go:1341] 
"Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:14.170808 kubelet[2898]: E0114 23:48:14.170774 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:14.461352 systemd-networkd[1602]: cali4d25f1fb808: Gained IPv6LL Jan 14 23:48:14.780497 systemd-networkd[1602]: cali286487d9b98: Gained IPv6LL Jan 14 23:48:15.000418 kubelet[2898]: E0114 23:48:15.000375 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:15.019000 audit[4601]: NETFILTER_CFG table=filter:129 family=2 entries=20 op=nft_register_rule pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:15.019000 audit[4601]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffe63ed7d0 a2=0 a3=1 items=0 ppid=3023 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.019000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:15.025000 audit[4601]: NETFILTER_CFG table=nat:130 family=2 entries=14 op=nft_register_rule pid=4601 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:15.025000 audit[4601]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffe63ed7d0 a2=0 a3=1 items=0 ppid=3023 pid=4601 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.025000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:15.631898 containerd[1695]: time="2026-01-14T23:48:15.631782747Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-t9xqm,Uid:f47a1c23-1d14-45a5-9fef-8bb462878104,Namespace:kube-system,Attempt:0,}" Jan 14 23:48:15.632647 containerd[1695]: time="2026-01-14T23:48:15.632233829Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5sxpk,Uid:fcec49c5-6358-46d9-9922-8a81fb4bafd8,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:15.765391 systemd-networkd[1602]: cali30b4553f055: Link UP Jan 14 23:48:15.766193 systemd-networkd[1602]: cali30b4553f055: Gained carrier Jan 14 23:48:15.783711 containerd[1695]: 2026-01-14 23:48:15.684 [INFO][4603] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0 coredns-668d6bf9bc- kube-system f47a1c23-1d14-45a5-9fef-8bb462878104 1065 0 2026-01-14 23:45:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 coredns-668d6bf9bc-t9xqm eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali30b4553f055 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-" Jan 14 23:48:15.783711 containerd[1695]: 2026-01-14 23:48:15.685 [INFO][4603] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.783711 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4632] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" HandleID="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4632] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" HandleID="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137e40), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"coredns-668d6bf9bc-t9xqm", "timestamp":"2026-01-14 23:48:15.715406283 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4632] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4632] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4632] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.726 [INFO][4632] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.732 [INFO][4632] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.738 [INFO][4632] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.740 [INFO][4632] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.783895 containerd[1695]: 2026-01-14 23:48:15.742 [INFO][4632] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.742 [INFO][4632] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.744 [INFO][4632] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2 Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.749 [INFO][4632] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.755 [INFO][4632] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.132/26] block=192.168.21.128/26 handle="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.756 [INFO][4632] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.132/26] handle="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.756 [INFO][4632] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:15.784077 containerd[1695]: 2026-01-14 23:48:15.756 [INFO][4632] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.132/26] IPv6=[] ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" HandleID="k8s-pod-network.c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.761 [INFO][4603] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f47a1c23-1d14-45a5-9fef-8bb462878104", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"coredns-668d6bf9bc-t9xqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30b4553f055", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.762 [INFO][4603] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.132/32] ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.762 [INFO][4603] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30b4553f055 ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.767 [INFO][4603] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.767 [INFO][4603] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"f47a1c23-1d14-45a5-9fef-8bb462878104", ResourceVersion:"1065", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2", Pod:"coredns-668d6bf9bc-t9xqm", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.132/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30b4553f055", 
MAC:"8e:c2:2c:16:8d:ca", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:15.784205 containerd[1695]: 2026-01-14 23:48:15.780 [INFO][4603] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" Namespace="kube-system" Pod="coredns-668d6bf9bc-t9xqm" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--t9xqm-eth0" Jan 14 23:48:15.797000 audit[4658]: NETFILTER_CFG table=filter:131 family=2 entries=50 op=nft_register_chain pid=4658 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:15.797000 audit[4658]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24928 a0=3 a1=ffffd052ccb0 a2=0 a3=ffff93614fa8 items=0 ppid=4222 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.797000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:15.808186 containerd[1695]: time="2026-01-14T23:48:15.806729442Z" level=info msg="connecting to shim c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2" address="unix:///run/containerd/s/d59b3a3ec9bf874c57d6b5aa2b4bde629e417f04cd5546d572bd7496ca442c06" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:15.829478 
systemd[1]: Started cri-containerd-c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2.scope - libcontainer container c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2. Jan 14 23:48:15.842000 audit: BPF prog-id=221 op=LOAD Jan 14 23:48:15.842000 audit: BPF prog-id=222 op=LOAD Jan 14 23:48:15.842000 audit[4677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.842000 audit: BPF prog-id=222 op=UNLOAD Jan 14 23:48:15.842000 audit[4677]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.842000 audit: BPF prog-id=223 op=LOAD Jan 14 23:48:15.842000 audit[4677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 
23:48:15.842000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.843000 audit: BPF prog-id=224 op=LOAD Jan 14 23:48:15.843000 audit[4677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.843000 audit: BPF prog-id=224 op=UNLOAD Jan 14 23:48:15.843000 audit[4677]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.843000 audit: BPF prog-id=223 op=UNLOAD Jan 14 23:48:15.843000 audit[4677]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.843000 audit: BPF prog-id=225 op=LOAD Jan 14 23:48:15.843000 audit[4677]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4667 pid=4677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.843000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6330623733643239353335393237353039323666396437663934353232 Jan 14 23:48:15.870928 systemd-networkd[1602]: calie1f5f00684b: Link UP Jan 14 23:48:15.871711 systemd-networkd[1602]: calie1f5f00684b: Gained carrier Jan 14 23:48:15.876752 containerd[1695]: time="2026-01-14T23:48:15.876721936Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-t9xqm,Uid:f47a1c23-1d14-45a5-9fef-8bb462878104,Namespace:kube-system,Attempt:0,} returns sandbox id \"c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2\"" Jan 14 23:48:15.885037 containerd[1695]: time="2026-01-14T23:48:15.884908841Z" level=info msg="CreateContainer within sandbox \"c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.684 [INFO][4609] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} 
{ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0 goldmane-666569f655- calico-system fcec49c5-6358-46d9-9922-8a81fb4bafd8 1070 0 2026-01-14 23:45:39 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 goldmane-666569f655-5sxpk eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie1f5f00684b [] [] }} ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.685 [INFO][4609] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4630] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" HandleID="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Workload="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4630] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" HandleID="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Workload="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000136450), 
Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"goldmane-666569f655-5sxpk", "timestamp":"2026-01-14 23:48:15.715618203 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.715 [INFO][4630] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.756 [INFO][4630] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.756 [INFO][4630] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.827 [INFO][4630] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.834 [INFO][4630] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.841 [INFO][4630] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.843 [INFO][4630] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.845 [INFO][4630] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.846 [INFO][4630] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 
handle="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.848 [INFO][4630] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9 Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.851 [INFO][4630] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.863 [INFO][4630] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.21.133/26] block=192.168.21.128/26 handle="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.863 [INFO][4630] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.133/26] handle="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.864 [INFO][4630] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 14 23:48:15.893876 containerd[1695]: 2026-01-14 23:48:15.864 [INFO][4630] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.133/26] IPv6=[] ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" HandleID="k8s-pod-network.b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Workload="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.867 [INFO][4609] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"fcec49c5-6358-46d9-9922-8a81fb4bafd8", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"goldmane-666569f655-5sxpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", 
Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1f5f00684b", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.867 [INFO][4609] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.133/32] ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.867 [INFO][4609] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie1f5f00684b ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.871 [INFO][4609] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.873 [INFO][4609] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0", GenerateName:"goldmane-666569f655-", 
Namespace:"calico-system", SelfLink:"", UID:"fcec49c5-6358-46d9-9922-8a81fb4bafd8", ResourceVersion:"1070", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 39, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9", Pod:"goldmane-666569f655-5sxpk", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.21.133/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie1f5f00684b", MAC:"0a:70:4e:55:d5:5f", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:15.894623 containerd[1695]: 2026-01-14 23:48:15.890 [INFO][4609] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" Namespace="calico-system" Pod="goldmane-666569f655-5sxpk" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-goldmane--666569f655--5sxpk-eth0" Jan 14 23:48:15.898973 containerd[1695]: time="2026-01-14T23:48:15.898936443Z" level=info msg="Container a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:48:15.910290 containerd[1695]: time="2026-01-14T23:48:15.910018277Z" level=info msg="CreateContainer within sandbox 
\"c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4\"" Jan 14 23:48:15.910989 containerd[1695]: time="2026-01-14T23:48:15.910948960Z" level=info msg="StartContainer for \"a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4\"" Jan 14 23:48:15.912282 containerd[1695]: time="2026-01-14T23:48:15.912226844Z" level=info msg="connecting to shim a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4" address="unix:///run/containerd/s/d59b3a3ec9bf874c57d6b5aa2b4bde629e417f04cd5546d572bd7496ca442c06" protocol=ttrpc version=3 Jan 14 23:48:15.912000 audit[4718]: NETFILTER_CFG table=filter:132 family=2 entries=56 op=nft_register_chain pid=4718 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:15.912000 audit[4718]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28744 a0=3 a1=ffffcc2883f0 a2=0 a3=ffffa9023fa8 items=0 ppid=4222 pid=4718 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.912000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:15.929710 containerd[1695]: time="2026-01-14T23:48:15.929664937Z" level=info msg="connecting to shim b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9" address="unix:///run/containerd/s/248ab2d20c32753f1f02ec3d005b75f4203995797fdfa9bc112d7aaa483b4d00" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:15.933475 systemd[1]: Started cri-containerd-a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4.scope - libcontainer container a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4. 
Jan 14 23:48:15.944000 audit: BPF prog-id=226 op=LOAD Jan 14 23:48:15.945000 audit: BPF prog-id=227 op=LOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=227 op=UNLOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=228 op=LOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=229 op=LOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=229 op=UNLOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=228 op=UNLOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.945000 audit: BPF prog-id=230 op=LOAD Jan 14 23:48:15.945000 audit[4719]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4667 pid=4719 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.945000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6139353561313138643061666238623733616262376130656230623038 Jan 14 23:48:15.956564 systemd[1]: Started cri-containerd-b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9.scope - libcontainer container b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9. 
Jan 14 23:48:15.968695 containerd[1695]: time="2026-01-14T23:48:15.968655616Z" level=info msg="StartContainer for \"a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4\" returns successfully" Jan 14 23:48:15.968000 audit: BPF prog-id=231 op=LOAD Jan 14 23:48:15.969000 audit: BPF prog-id=232 op=LOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=232 op=UNLOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=233 op=LOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=234 op=LOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=234 op=UNLOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=233 op=UNLOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.969000 audit: BPF prog-id=235 op=LOAD Jan 14 23:48:15.969000 audit[4750]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=4739 pid=4750 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:15.969000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239323930326538356230663661393930313437356462323238643138 Jan 14 23:48:15.995717 containerd[1695]: time="2026-01-14T23:48:15.995664619Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5sxpk,Uid:fcec49c5-6358-46d9-9922-8a81fb4bafd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9\"" Jan 14 23:48:15.997452 containerd[1695]: time="2026-01-14T23:48:15.997403704Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:48:16.041000 audit[4800]: NETFILTER_CFG table=filter:133 family=2 entries=20 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:16.041000 audit[4800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd9b746c0 a2=0 a3=1 items=0 ppid=3023 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.041000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:16.045000 audit[4800]: NETFILTER_CFG table=nat:134 family=2 entries=14 op=nft_register_rule pid=4800 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:16.045000 audit[4800]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffd9b746c0 a2=0 a3=1 items=0 ppid=3023 pid=4800 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:16.342743 containerd[1695]: time="2026-01-14T23:48:16.342637439Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:16.344169 containerd[1695]: time="2026-01-14T23:48:16.344120483Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:48:16.344249 containerd[1695]: time="2026-01-14T23:48:16.344219084Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:16.344431 kubelet[2898]: E0114 23:48:16.344394 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:16.345038 kubelet[2898]: E0114 23:48:16.344444 2898 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:16.345038 kubelet[2898]: E0114 23:48:16.344562 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:16.346033 kubelet[2898]: E0114 23:48:16.346001 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:16.632568 containerd[1695]: time="2026-01-14T23:48:16.632437404Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-5b767987c5-2glxx,Uid:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,Namespace:calico-apiserver,Attempt:0,}" Jan 14 23:48:16.742391 systemd-networkd[1602]: cali1417e3ef349: Link UP Jan 14 23:48:16.742786 systemd-networkd[1602]: cali1417e3ef349: Gained carrier Jan 14 23:48:16.756413 kubelet[2898]: I0114 23:48:16.756344 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-t9xqm" podStartSLOduration=172.756323703 podStartE2EDuration="2m52.756323703s" podCreationTimestamp="2026-01-14 23:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:48:16.020484975 +0000 UTC m=+177.470023569" watchObservedRunningTime="2026-01-14 23:48:16.756323703 +0000 UTC m=+178.205862257" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.672 [INFO][4801] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0 calico-apiserver-5b767987c5- calico-apiserver 300b5f0b-ed7c-4a04-a4b8-68a71ea25297 1067 0 2026-01-14 23:45:34 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:5b767987c5 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 calico-apiserver-5b767987c5-2glxx eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1417e3ef349 [] [] }} ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.672 [INFO][4801] 
cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.693 [INFO][4816] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" HandleID="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.694 [INFO][4816] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" HandleID="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400050fe20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"calico-apiserver-5b767987c5-2glxx", "timestamp":"2026-01-14 23:48:16.693940312 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.694 [INFO][4816] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.694 [INFO][4816] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.694 [INFO][4816] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.704 [INFO][4816] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.709 [INFO][4816] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.714 [INFO][4816] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.716 [INFO][4816] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.719 [INFO][4816] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.719 [INFO][4816] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.720 [INFO][4816] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40 Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.730 [INFO][4816] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.738 [INFO][4816] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.134/26] block=192.168.21.128/26 handle="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.738 [INFO][4816] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.134/26] handle="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.739 [INFO][4816] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:16.759701 containerd[1695]: 2026-01-14 23:48:16.739 [INFO][4816] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.134/26] IPv6=[] ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" HandleID="k8s-pod-network.7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.740 [INFO][4801] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0", GenerateName:"calico-apiserver-5b767987c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"300b5f0b-ed7c-4a04-a4b8-68a71ea25297", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b767987c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"calico-apiserver-5b767987c5-2glxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1417e3ef349", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.740 [INFO][4801] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.134/32] ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.741 [INFO][4801] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1417e3ef349 ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.743 [INFO][4801] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" 
Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.743 [INFO][4801] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0", GenerateName:"calico-apiserver-5b767987c5-", Namespace:"calico-apiserver", SelfLink:"", UID:"300b5f0b-ed7c-4a04-a4b8-68a71ea25297", ResourceVersion:"1067", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 34, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"5b767987c5", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40", Pod:"calico-apiserver-5b767987c5-2glxx", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.21.134/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali1417e3ef349", MAC:"ca:7d:26:f5:a9:bd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:16.760181 containerd[1695]: 2026-01-14 23:48:16.757 [INFO][4801] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" Namespace="calico-apiserver" Pod="calico-apiserver-5b767987c5-2glxx" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--apiserver--5b767987c5--2glxx-eth0" Jan 14 23:48:16.768000 audit[4832]: NETFILTER_CFG table=filter:135 family=2 entries=59 op=nft_register_chain pid=4832 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:16.768000 audit[4832]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=29492 a0=3 a1=fffff98da2d0 a2=0 a3=ffffb65bffa8 items=0 ppid=4222 pid=4832 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.768000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:16.784986 containerd[1695]: time="2026-01-14T23:48:16.784938910Z" level=info msg="connecting to shim 7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40" address="unix:///run/containerd/s/d27baa9f2d1dc643f0cacdf0032b24b4683fd061122b3193da82ee672efa5d65" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:16.810478 systemd[1]: Started cri-containerd-7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40.scope - libcontainer container 7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40. 
Jan 14 23:48:16.819000 audit: BPF prog-id=236 op=LOAD Jan 14 23:48:16.819000 audit: BPF prog-id=237 op=LOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=237 op=UNLOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=238 op=LOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=239 op=LOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=239 op=UNLOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=238 op=UNLOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.819000 audit: BPF prog-id=240 op=LOAD Jan 14 23:48:16.819000 audit[4852]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4842 pid=4852 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:16.819000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766373237616162356664343436633264613261613035663834653334 Jan 14 23:48:16.842400 containerd[1695]: time="2026-01-14T23:48:16.842354485Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-5b767987c5-2glxx,Uid:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40\"" Jan 14 23:48:16.845766 containerd[1695]: time="2026-01-14T23:48:16.845717856Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:17.012318 kubelet[2898]: E0114 23:48:17.012227 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" 
podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:17.040000 audit[4879]: NETFILTER_CFG table=filter:136 family=2 entries=20 op=nft_register_rule pid=4879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:17.040000 audit[4879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff7585520 a2=0 a3=1 items=0 ppid=3023 pid=4879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.040000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:17.057000 audit[4879]: NETFILTER_CFG table=nat:137 family=2 entries=14 op=nft_register_rule pid=4879 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:17.057000 audit[4879]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff7585520 a2=0 a3=1 items=0 ppid=3023 pid=4879 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.057000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:17.068000 audit[4881]: NETFILTER_CFG table=filter:138 family=2 entries=17 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:17.068000 audit[4881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd313a8b0 a2=0 a3=1 items=0 ppid=3023 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.068000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:17.078000 audit[4881]: NETFILTER_CFG table=nat:139 family=2 entries=35 op=nft_register_chain pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:17.078000 audit[4881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd313a8b0 a2=0 a3=1 items=0 ppid=3023 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.078000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:17.084382 systemd-networkd[1602]: cali30b4553f055: Gained IPv6LL Jan 14 23:48:17.204580 containerd[1695]: time="2026-01-14T23:48:17.204428991Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:17.205876 containerd[1695]: time="2026-01-14T23:48:17.205835356Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:17.205962 containerd[1695]: time="2026-01-14T23:48:17.205911796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:17.206094 kubelet[2898]: E0114 23:48:17.206057 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:17.206143 kubelet[2898]: E0114 23:48:17.206106 2898 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:17.206312 kubelet[2898]: E0114 23:48:17.206219 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:17.207597 kubelet[2898]: E0114 23:48:17.207557 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:48:17.596484 systemd-networkd[1602]: calie1f5f00684b: Gained IPv6LL Jan 14 23:48:17.632293 containerd[1695]: time="2026-01-14T23:48:17.632232258Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-668d6bf9bc-x227k,Uid:86940f44-aeec-4f6a-958e-6dee8b716868,Namespace:kube-system,Attempt:0,}" Jan 14 23:48:17.766386 systemd-networkd[1602]: cali30659775bb0: Link UP Jan 14 23:48:17.767405 systemd-networkd[1602]: cali30659775bb0: Gained carrier Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.668 [INFO][4883] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0 coredns-668d6bf9bc- kube-system 86940f44-aeec-4f6a-958e-6dee8b716868 1061 0 2026-01-14 23:45:24 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 coredns-668d6bf9bc-x227k eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali30659775bb0 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.668 [INFO][4883] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.719 [INFO][4897] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" HandleID="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 
23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.719 [INFO][4897] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" HandleID="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137b50), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"coredns-668d6bf9bc-x227k", "timestamp":"2026-01-14 23:48:17.719033683 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.719 [INFO][4897] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.719 [INFO][4897] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.719 [INFO][4897] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.729 [INFO][4897] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.734 [INFO][4897] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.738 [INFO][4897] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.740 [INFO][4897] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.742 [INFO][4897] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.742 [INFO][4897] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.744 [INFO][4897] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3 Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.751 [INFO][4897] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.760 [INFO][4897] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.135/26] block=192.168.21.128/26 handle="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.760 [INFO][4897] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.135/26] handle="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.760 [INFO][4897] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:17.783606 containerd[1695]: 2026-01-14 23:48:17.760 [INFO][4897] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.135/26] IPv6=[] ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" HandleID="k8s-pod-network.240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Workload="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.762 [INFO][4883] cni-plugin/k8s.go 418: Populated endpoint ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"86940f44-aeec-4f6a-958e-6dee8b716868", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"coredns-668d6bf9bc-x227k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30659775bb0", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.762 [INFO][4883] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.135/32] ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.762 [INFO][4883] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali30659775bb0 ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.766 [INFO][4883] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.767 [INFO][4883] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"86940f44-aeec-4f6a-958e-6dee8b716868", ResourceVersion:"1061", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 24, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3", Pod:"coredns-668d6bf9bc-x227k", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.21.135/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali30659775bb0", 
MAC:"6e:0c:8b:cf:b8:3d", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:17.784313 containerd[1695]: 2026-01-14 23:48:17.781 [INFO][4883] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" Namespace="kube-system" Pod="coredns-668d6bf9bc-x227k" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-coredns--668d6bf9bc--x227k-eth0" Jan 14 23:48:17.795000 audit[4914]: NETFILTER_CFG table=filter:140 family=2 entries=54 op=nft_register_chain pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:17.797296 kernel: kauditd_printk_skb: 161 callbacks suppressed Jan 14 23:48:17.797358 kernel: audit: type=1325 audit(1768434497.795:717): table=filter:140 family=2 entries=54 op=nft_register_chain pid=4914 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:17.795000 audit[4914]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=25556 a0=3 a1=ffffc78547e0 a2=0 a3=ffffa6fe4fa8 items=0 ppid=4222 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.804870 kernel: audit: type=1300 audit(1768434497.795:717): arch=c00000b7 syscall=211 success=yes exit=25556 a0=3 a1=ffffc78547e0 a2=0 a3=ffffa6fe4fa8 items=0 ppid=4222 pid=4914 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.804938 kernel: audit: type=1327 audit(1768434497.795:717): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:17.795000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:17.811844 containerd[1695]: time="2026-01-14T23:48:17.811791727Z" level=info msg="connecting to shim 240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3" address="unix:///run/containerd/s/a20754d8f60954f861e0f85dd44917e75c9b200bdf775a48bfecab30d4802ec2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:17.836481 systemd[1]: Started cri-containerd-240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3.scope - libcontainer container 240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3. 
Jan 14 23:48:17.845000 audit: BPF prog-id=241 op=LOAD Jan 14 23:48:17.846000 audit: BPF prog-id=242 op=LOAD Jan 14 23:48:17.848935 kernel: audit: type=1334 audit(1768434497.845:718): prog-id=241 op=LOAD Jan 14 23:48:17.848985 kernel: audit: type=1334 audit(1768434497.846:719): prog-id=242 op=LOAD Jan 14 23:48:17.849008 kernel: audit: type=1300 audit(1768434497.846:719): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.846000 audit[4934]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.854941 kernel: audit: type=1327 audit(1768434497.846:719): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.846000 audit: BPF prog-id=242 op=UNLOAD Jan 14 23:48:17.855808 kernel: audit: type=1334 audit(1768434497.846:720): prog-id=242 op=UNLOAD Jan 14 23:48:17.855833 kernel: audit: type=1300 audit(1768434497.846:720): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.846000 audit[4934]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.846000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.861640 kernel: audit: type=1327 audit(1768434497.846:720): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.847000 audit: BPF prog-id=243 op=LOAD Jan 14 23:48:17.847000 audit[4934]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.847000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.851000 audit: BPF prog-id=244 op=LOAD Jan 14 23:48:17.851000 audit[4934]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.851000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.853000 audit: BPF prog-id=244 op=UNLOAD Jan 14 23:48:17.853000 audit[4934]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.853000 audit: BPF prog-id=243 op=UNLOAD Jan 14 23:48:17.853000 audit[4934]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.853000 audit: BPF prog-id=245 op=LOAD Jan 14 23:48:17.853000 audit[4934]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4923 
pid=4934 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.853000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234306230643234363238373430373339323931396135616337326565 Jan 14 23:48:17.881023 containerd[1695]: time="2026-01-14T23:48:17.880976618Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-x227k,Uid:86940f44-aeec-4f6a-958e-6dee8b716868,Namespace:kube-system,Attempt:0,} returns sandbox id \"240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3\"" Jan 14 23:48:17.884118 containerd[1695]: time="2026-01-14T23:48:17.884067428Z" level=info msg="CreateContainer within sandbox \"240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 14 23:48:17.894317 containerd[1695]: time="2026-01-14T23:48:17.893407176Z" level=info msg="Container bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:48:17.900424 containerd[1695]: time="2026-01-14T23:48:17.900376477Z" level=info msg="CreateContainer within sandbox \"240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e\"" Jan 14 23:48:17.901739 containerd[1695]: time="2026-01-14T23:48:17.901483081Z" level=info msg="StartContainer for \"bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e\"" Jan 14 23:48:17.902467 containerd[1695]: time="2026-01-14T23:48:17.902440204Z" level=info msg="connecting to shim bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e" 
address="unix:///run/containerd/s/a20754d8f60954f861e0f85dd44917e75c9b200bdf775a48bfecab30d4802ec2" protocol=ttrpc version=3 Jan 14 23:48:17.921444 systemd[1]: Started cri-containerd-bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e.scope - libcontainer container bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e. Jan 14 23:48:17.929000 audit: BPF prog-id=246 op=LOAD Jan 14 23:48:17.930000 audit: BPF prog-id=247 op=LOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=247 op=UNLOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=248 op=LOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=249 op=LOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=249 op=UNLOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=248 op=UNLOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4923 pid=4960 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.930000 audit: BPF prog-id=250 op=LOAD Jan 14 23:48:17.930000 audit[4960]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=4923 pid=4960 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:17.930000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6262353534346430616539386637656534383137646533646231366438 Jan 14 23:48:17.947044 containerd[1695]: time="2026-01-14T23:48:17.947008380Z" level=info msg="StartContainer for \"bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e\" returns successfully" Jan 14 23:48:18.015150 kubelet[2898]: E0114 23:48:18.015072 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:48:18.039000 audit[4994]: 
NETFILTER_CFG table=filter:141 family=2 entries=14 op=nft_register_rule pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:18.039000 audit[4994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffffb2bb190 a2=0 a3=1 items=0 ppid=3023 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.039000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:18.046528 kubelet[2898]: I0114 23:48:18.045622 2898 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-x227k" podStartSLOduration=174.045583201 podStartE2EDuration="2m54.045583201s" podCreationTimestamp="2026-01-14 23:45:24 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-14 23:48:18.044927399 +0000 UTC m=+179.494465913" watchObservedRunningTime="2026-01-14 23:48:18.045583201 +0000 UTC m=+179.495121755" Jan 14 23:48:18.045000 audit[4994]: NETFILTER_CFG table=nat:142 family=2 entries=20 op=nft_register_rule pid=4994 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:18.045000 audit[4994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffffb2bb190 a2=0 a3=1 items=0 ppid=3023 pid=4994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.045000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:18.428565 systemd-networkd[1602]: cali1417e3ef349: Gained IPv6LL Jan 14 23:48:18.632477 
containerd[1695]: time="2026-01-14T23:48:18.632424154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd9b5689c-544p6,Uid:2d307ca4-cd62-4987-b2dc-ed6b76a2794e,Namespace:calico-system,Attempt:0,}" Jan 14 23:48:18.740232 systemd-networkd[1602]: cali924d03e30f5: Link UP Jan 14 23:48:18.740671 systemd-networkd[1602]: cali924d03e30f5: Gained carrier Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.669 [INFO][4998] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0 calico-kube-controllers-7cd9b5689c- calico-system 2d307ca4-cd62-4987-b2dc-ed6b76a2794e 1069 0 2026-01-14 23:45:42 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:7cd9b5689c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4515-1-0-n-1d3be4f164 calico-kube-controllers-7cd9b5689c-544p6 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali924d03e30f5 [] [] }} ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.669 [INFO][4998] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.692 [INFO][5012] ipam/ipam_plugin.go 227: Calico CNI IPAM 
request count IPv4=1 IPv6=0 ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" HandleID="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.693 [INFO][5012] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" HandleID="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004c790), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4515-1-0-n-1d3be4f164", "pod":"calico-kube-controllers-7cd9b5689c-544p6", "timestamp":"2026-01-14 23:48:18.692966099 +0000 UTC"}, Hostname:"ci-4515-1-0-n-1d3be4f164", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.693 [INFO][5012] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.693 [INFO][5012] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.693 [INFO][5012] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4515-1-0-n-1d3be4f164' Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.704 [INFO][5012] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.709 [INFO][5012] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.713 [INFO][5012] ipam/ipam.go 511: Trying affinity for 192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.715 [INFO][5012] ipam/ipam.go 158: Attempting to load block cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.718 [INFO][5012] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.21.128/26 host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.718 [INFO][5012] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.21.128/26 handle="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.720 [INFO][5012] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.725 [INFO][5012] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.21.128/26 handle="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.736 [INFO][5012] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.21.136/26] block=192.168.21.128/26 handle="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.736 [INFO][5012] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.21.136/26] handle="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" host="ci-4515-1-0-n-1d3be4f164" Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.736 [INFO][5012] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 14 23:48:18.755372 containerd[1695]: 2026-01-14 23:48:18.736 [INFO][5012] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.21.136/26] IPv6=[] ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" HandleID="k8s-pod-network.95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Workload="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.738 [INFO][4998] cni-plugin/k8s.go 418: Populated endpoint ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0", GenerateName:"calico-kube-controllers-7cd9b5689c-", Namespace:"calico-system", SelfLink:"", UID:"2d307ca4-cd62-4987-b2dc-ed6b76a2794e", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd9b5689c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"", Pod:"calico-kube-controllers-7cd9b5689c-544p6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.136/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali924d03e30f5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.738 [INFO][4998] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.21.136/32] ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.739 [INFO][4998] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali924d03e30f5 ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.740 [INFO][4998] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding 
ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.741 [INFO][4998] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0", GenerateName:"calico-kube-controllers-7cd9b5689c-", Namespace:"calico-system", SelfLink:"", UID:"2d307ca4-cd62-4987-b2dc-ed6b76a2794e", ResourceVersion:"1069", Generation:0, CreationTimestamp:time.Date(2026, time.January, 14, 23, 45, 42, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7cd9b5689c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4515-1-0-n-1d3be4f164", ContainerID:"95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc", Pod:"calico-kube-controllers-7cd9b5689c-544p6", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.21.136/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali924d03e30f5", MAC:"86:84:43:05:57:dd", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 14 23:48:18.755860 containerd[1695]: 2026-01-14 23:48:18.753 [INFO][4998] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" Namespace="calico-system" Pod="calico-kube-controllers-7cd9b5689c-544p6" WorkloadEndpoint="ci--4515--1--0--n--1d3be4f164-k8s-calico--kube--controllers--7cd9b5689c--544p6-eth0" Jan 14 23:48:18.764000 audit[5029]: NETFILTER_CFG table=filter:143 family=2 entries=52 op=nft_register_chain pid=5029 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 14 23:48:18.764000 audit[5029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24296 a0=3 a1=ffffcb7ab750 a2=0 a3=ffffb8864fa8 items=0 ppid=4222 pid=5029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.764000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 14 23:48:18.778371 containerd[1695]: time="2026-01-14T23:48:18.778288119Z" level=info msg="connecting to shim 95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc" address="unix:///run/containerd/s/3debddfdad2cf0486e9aa95b3a0eee005c664327dab991cf7935d3ecdbd968b2" namespace=k8s.io protocol=ttrpc version=3 Jan 14 23:48:18.803507 systemd[1]: Started cri-containerd-95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc.scope - libcontainer container 95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc. 
Jan 14 23:48:18.815000 audit: BPF prog-id=251 op=LOAD Jan 14 23:48:18.815000 audit: BPF prog-id=252 op=LOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=252 op=UNLOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=253 op=LOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=254 op=LOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=254 op=UNLOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=253 op=UNLOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.815000 audit: BPF prog-id=255 op=LOAD Jan 14 23:48:18.815000 audit[5049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=5038 pid=5049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:18.815000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3935656132343833666663376362346365613438326661646635666332 Jan 14 23:48:18.836610 containerd[1695]: time="2026-01-14T23:48:18.836567057Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7cd9b5689c-544p6,Uid:2d307ca4-cd62-4987-b2dc-ed6b76a2794e,Namespace:calico-system,Attempt:0,} returns sandbox id \"95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc\"" Jan 14 23:48:18.838526 containerd[1695]: time="2026-01-14T23:48:18.838468623Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:48:18.877069 systemd-networkd[1602]: cali30659775bb0: Gained IPv6LL Jan 14 23:48:19.042000 audit[5079]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:19.042000 audit[5079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffc04d6290 a2=0 a3=1 items=0 ppid=3023 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:19.042000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:19.056000 audit[5079]: NETFILTER_CFG table=nat:145 family=2 entries=56 op=nft_register_chain pid=5079 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 14 23:48:19.056000 audit[5079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffc04d6290 a2=0 a3=1 items=0 ppid=3023 pid=5079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:48:19.056000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 14 23:48:19.169060 containerd[1695]: time="2026-01-14T23:48:19.168976953Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:19.170668 containerd[1695]: time="2026-01-14T23:48:19.170622318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:48:19.170765 containerd[1695]: time="2026-01-14T23:48:19.170703518Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:19.171068 kubelet[2898]: E0114 23:48:19.170809 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:48:19.171068 kubelet[2898]: E0114 23:48:19.170852 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:48:19.171068 kubelet[2898]: E0114 23:48:19.170967 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:19.172227 kubelet[2898]: E0114 23:48:19.172175 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:48:20.020333 kubelet[2898]: 
E0114 23:48:20.019877 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:48:20.348503 systemd-networkd[1602]: cali924d03e30f5: Gained IPv6LL Jan 14 23:48:20.633048 containerd[1695]: time="2026-01-14T23:48:20.632945065Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:48:20.984404 containerd[1695]: time="2026-01-14T23:48:20.984198498Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:20.985933 containerd[1695]: time="2026-01-14T23:48:20.985886263Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:48:20.986009 containerd[1695]: time="2026-01-14T23:48:20.985974863Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:20.986177 kubelet[2898]: E0114 23:48:20.986142 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:20.986457 kubelet[2898]: E0114 23:48:20.986187 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:20.986457 kubelet[2898]: E0114 23:48:20.986321 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:20.988260 containerd[1695]: time="2026-01-14T23:48:20.988218030Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:48:21.312612 containerd[1695]: time="2026-01-14T23:48:21.312504541Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:21.314230 containerd[1695]: time="2026-01-14T23:48:21.314190186Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:48:21.314330 containerd[1695]: time="2026-01-14T23:48:21.314209706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:21.314630 kubelet[2898]: E0114 23:48:21.314430 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:21.314630 kubelet[2898]: E0114 23:48:21.314475 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:21.314630 kubelet[2898]: E0114 23:48:21.314583 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:21.315784 kubelet[2898]: E0114 23:48:21.315739 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:25.632662 containerd[1695]: time="2026-01-14T23:48:25.632593298Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:48:26.143108 containerd[1695]: time="2026-01-14T23:48:26.143053817Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:26.144160 containerd[1695]: time="2026-01-14T23:48:26.144125900Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:48:26.144230 containerd[1695]: time="2026-01-14T23:48:26.144184940Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:26.144419 kubelet[2898]: E0114 23:48:26.144381 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:26.144786 kubelet[2898]: E0114 23:48:26.144431 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:26.144786 kubelet[2898]: E0114 23:48:26.144555 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPriv
ilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:26.146697 containerd[1695]: time="2026-01-14T23:48:26.146670428Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:48:26.479422 containerd[1695]: time="2026-01-14T23:48:26.479297444Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:26.482028 containerd[1695]: time="2026-01-14T23:48:26.481794572Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:48:26.482156 containerd[1695]: time="2026-01-14T23:48:26.481860572Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:26.482230 kubelet[2898]: E0114 23:48:26.482189 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:26.482289 
kubelet[2898]: E0114 23:48:26.482243 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:26.482415 kubelet[2898]: E0114 23:48:26.482374 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*t
rue,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:26.484381 kubelet[2898]: E0114 23:48:26.484336 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:48:28.634287 containerd[1695]: time="2026-01-14T23:48:28.633037983Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:28.965357 containerd[1695]: time="2026-01-14T23:48:28.965045118Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:28.966294 containerd[1695]: time="2026-01-14T23:48:28.966244041Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:28.966418 containerd[1695]: time="2026-01-14T23:48:28.966288561Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:28.966471 kubelet[2898]: E0114 23:48:28.966421 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:28.966471 kubelet[2898]: E0114 23:48:28.966460 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:28.967262 kubelet[2898]: E0114 23:48:28.966648 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:28.967561 containerd[1695]: time="2026-01-14T23:48:28.967116084Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:48:28.968284 kubelet[2898]: E0114 23:48:28.968224 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:29.299677 containerd[1695]: time="2026-01-14T23:48:29.299571619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:29.301123 containerd[1695]: time="2026-01-14T23:48:29.301073784Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:48:29.301290 containerd[1695]: time="2026-01-14T23:48:29.301119024Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:29.301371 kubelet[2898]: E0114 23:48:29.301339 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:29.301475 kubelet[2898]: E0114 23:48:29.301459 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:29.301685 kubelet[2898]: E0114 23:48:29.301637 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:29.302901 kubelet[2898]: E0114 23:48:29.302873 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:31.634105 containerd[1695]: time="2026-01-14T23:48:31.634065751Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:31.979943 containerd[1695]: time="2026-01-14T23:48:31.979724367Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:31.981227 containerd[1695]: time="2026-01-14T23:48:31.981169531Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:31.981357 containerd[1695]: time="2026-01-14T23:48:31.981253531Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:31.981449 kubelet[2898]: E0114 23:48:31.981392 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:31.981449 kubelet[2898]: E0114 23:48:31.981445 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:31.981863 kubelet[2898]: E0114 23:48:31.981565 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:31.983208 kubelet[2898]: E0114 23:48:31.983030 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:48:32.632539 kubelet[2898]: E0114 23:48:32.632424 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:34.632912 containerd[1695]: time="2026-01-14T23:48:34.632732591Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:48:34.972697 containerd[1695]: time="2026-01-14T23:48:34.972414949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:34.974734 containerd[1695]: 
time="2026-01-14T23:48:34.974571755Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:48:34.974954 containerd[1695]: time="2026-01-14T23:48:34.974865996Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:34.975324 kubelet[2898]: E0114 23:48:34.975247 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:48:34.975615 kubelet[2898]: E0114 23:48:34.975341 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:48:34.975615 kubelet[2898]: E0114 23:48:34.975466 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:34.976968 kubelet[2898]: E0114 23:48:34.976931 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:48:40.634958 kubelet[2898]: E0114 23:48:40.634702 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:41.632681 kubelet[2898]: E0114 23:48:41.632636 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:41.634535 kubelet[2898]: E0114 23:48:41.634453 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:48:43.633986 kubelet[2898]: E0114 23:48:43.633941 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:48:44.636693 containerd[1695]: time="2026-01-14T23:48:44.636650831Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:48:44.971253 containerd[1695]: time="2026-01-14T23:48:44.971110693Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:44.974096 containerd[1695]: time="2026-01-14T23:48:44.974030462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:48:44.974210 containerd[1695]: time="2026-01-14T23:48:44.974130302Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:44.974369 kubelet[2898]: E0114 23:48:44.974329 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:44.974631 kubelet[2898]: E0114 23:48:44.974379 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:48:44.974631 kubelet[2898]: E0114 23:48:44.974487 2898 
kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:44.978321 containerd[1695]: time="2026-01-14T23:48:44.978253994Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 
14 23:48:45.310213 containerd[1695]: time="2026-01-14T23:48:45.310040208Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:45.311533 containerd[1695]: time="2026-01-14T23:48:45.311398772Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:48:45.311533 containerd[1695]: time="2026-01-14T23:48:45.311489772Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:45.311952 kubelet[2898]: E0114 23:48:45.311733 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:45.311952 kubelet[2898]: E0114 23:48:45.311777 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:48:45.311952 kubelet[2898]: E0114 23:48:45.311881 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:45.313313 kubelet[2898]: E0114 23:48:45.313242 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:47.632794 kubelet[2898]: E0114 23:48:47.632707 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:48:52.632584 containerd[1695]: time="2026-01-14T23:48:52.632522138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:52.985109 containerd[1695]: time="2026-01-14T23:48:52.984704774Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:52.986377 containerd[1695]: time="2026-01-14T23:48:52.986282818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" 
error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:52.986497 containerd[1695]: time="2026-01-14T23:48:52.986353579Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:52.986675 kubelet[2898]: E0114 23:48:52.986643 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:52.987237 kubelet[2898]: E0114 23:48:52.986989 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:52.987237 kubelet[2898]: E0114 23:48:52.987180 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:52.988367 kubelet[2898]: E0114 23:48:52.988332 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:48:53.633156 containerd[1695]: time="2026-01-14T23:48:53.633117594Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:48:53.965418 containerd[1695]: time="2026-01-14T23:48:53.965250489Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:53.966830 containerd[1695]: time="2026-01-14T23:48:53.966792494Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:48:53.967059 containerd[1695]: time="2026-01-14T23:48:53.966864694Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:53.967098 kubelet[2898]: E0114 23:48:53.966989 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:53.967289 kubelet[2898]: E0114 23:48:53.967183 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:48:53.967426 kubelet[2898]: E0114 23:48:53.967390 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:53.969435 containerd[1695]: time="2026-01-14T23:48:53.969415342Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:48:54.317471 containerd[1695]: time="2026-01-14T23:48:54.317421445Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:54.319224 containerd[1695]: time="2026-01-14T23:48:54.319181050Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:48:54.319917 containerd[1695]: time="2026-01-14T23:48:54.319869292Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:54.320181 kubelet[2898]: E0114 23:48:54.320141 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:54.320473 kubelet[2898]: E0114 23:48:54.320193 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:48:54.320473 kubelet[2898]: E0114 23:48:54.320317 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:54.321528 kubelet[2898]: E0114 23:48:54.321481 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:48:55.633009 containerd[1695]: time="2026-01-14T23:48:55.632960943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:48:55.633853 kubelet[2898]: E0114 23:48:55.633582 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:48:55.961699 containerd[1695]: time="2026-01-14T23:48:55.961454387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:55.963054 containerd[1695]: time="2026-01-14T23:48:55.962950952Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:48:55.963054 containerd[1695]: time="2026-01-14T23:48:55.962981232Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:55.963215 kubelet[2898]: E0114 23:48:55.963155 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:55.963215 kubelet[2898]: E0114 23:48:55.963205 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:48:55.963384 kubelet[2898]: E0114 23:48:55.963339 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:55.964650 kubelet[2898]: E0114 23:48:55.964586 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:48:56.633297 containerd[1695]: time="2026-01-14T23:48:56.632774318Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:48:56.962606 containerd[1695]: time="2026-01-14T23:48:56.962407525Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:48:56.963580 containerd[1695]: 
time="2026-01-14T23:48:56.963526328Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:48:56.963685 containerd[1695]: time="2026-01-14T23:48:56.963618488Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:48:56.963934 kubelet[2898]: E0114 23:48:56.963875 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:56.963934 kubelet[2898]: E0114 23:48:56.963932 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:48:56.964288 kubelet[2898]: E0114 23:48:56.964054 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:48:56.965560 kubelet[2898]: E0114 23:48:56.965520 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:49:02.636481 containerd[1695]: time="2026-01-14T23:49:02.636393498Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:49:02.972442 containerd[1695]: time="2026-01-14T23:49:02.972259004Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:02.974106 containerd[1695]: time="2026-01-14T23:49:02.974060929Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:49:02.974203 containerd[1695]: time="2026-01-14T23:49:02.974129529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:02.974343 kubelet[2898]: E0114 23:49:02.974302 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:49:02.974629 kubelet[2898]: E0114 23:49:02.974368 2898 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:49:02.974629 kubelet[2898]: E0114 23:49:02.974503 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:02.975721 kubelet[2898]: E0114 23:49:02.975670 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:49:03.632016 kubelet[2898]: 
E0114 23:49:03.631926 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:49:08.634529 kubelet[2898]: E0114 23:49:08.634466 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:49:08.635843 kubelet[2898]: E0114 23:49:08.634865 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not 
found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:49:09.633176 kubelet[2898]: E0114 23:49:09.633121 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:49:10.635277 kubelet[2898]: E0114 23:49:10.635160 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:49:14.632771 kubelet[2898]: E0114 23:49:14.632711 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:49:15.639072 kubelet[2898]: E0114 23:49:15.637565 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:49:22.633150 kubelet[2898]: E0114 23:49:22.633088 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:49:23.632996 kubelet[2898]: E0114 23:49:23.632952 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:49:24.633048 kubelet[2898]: E0114 23:49:24.632738 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:49:25.634068 containerd[1695]: time="2026-01-14T23:49:25.633420669Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:49:25.974319 containerd[1695]: time="2026-01-14T23:49:25.973993429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:25.977336 containerd[1695]: time="2026-01-14T23:49:25.977288239Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:49:25.977563 containerd[1695]: time="2026-01-14T23:49:25.977314199Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:25.977602 kubelet[2898]: E0114 23:49:25.977540 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:49:25.977602 kubelet[2898]: E0114 23:49:25.977593 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:49:25.978243 kubelet[2898]: E0114 23:49:25.978095 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDe
vices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:25.981261 containerd[1695]: time="2026-01-14T23:49:25.981219251Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:49:26.314126 containerd[1695]: time="2026-01-14T23:49:26.314050908Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:26.316108 containerd[1695]: time="2026-01-14T23:49:26.316050914Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:49:26.317025 containerd[1695]: time="2026-01-14T23:49:26.316232955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:26.317212 kubelet[2898]: E0114 23:49:26.317167 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:49:26.317280 kubelet[2898]: E0114 23:49:26.317216 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:49:26.317397 kubelet[2898]: E0114 23:49:26.317350 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in 
pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:26.318811 kubelet[2898]: E0114 23:49:26.318755 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:49:27.632516 kubelet[2898]: E0114 23:49:27.632454 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:49:28.634528 kubelet[2898]: E0114 23:49:28.634447 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:49:35.633195 kubelet[2898]: E0114 23:49:35.632807 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:49:36.632351 containerd[1695]: time="2026-01-14T23:49:36.632304748Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:49:36.965090 containerd[1695]: time="2026-01-14T23:49:36.964487643Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:36.969828 containerd[1695]: time="2026-01-14T23:49:36.969747299Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:49:36.969953 containerd[1695]: time="2026-01-14T23:49:36.969807459Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:36.969997 kubelet[2898]: E0114 23:49:36.969964 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:49:36.970296 kubelet[2898]: E0114 23:49:36.970010 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:49:36.970296 kubelet[2898]: E0114 23:49:36.970112 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:n
il,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:36.972998 containerd[1695]: time="2026-01-14T23:49:36.972233987Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:49:37.307598 containerd[1695]: time="2026-01-14T23:49:37.307434491Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:37.308827 containerd[1695]: time="2026-01-14T23:49:37.308778615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:49:37.308897 containerd[1695]: time="2026-01-14T23:49:37.308828535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:37.309522 kubelet[2898]: E0114 23:49:37.309483 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:49:37.309598 kubelet[2898]: E0114 23:49:37.309545 2898 
kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:49:37.309864 kubelet[2898]: E0114 23:49:37.309743 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOpt
ions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:37.311677 kubelet[2898]: E0114 23:49:37.311631 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:49:38.636834 containerd[1695]: time="2026-01-14T23:49:38.636757471Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:49:38.977690 containerd[1695]: time="2026-01-14T23:49:38.977411832Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:38.985126 containerd[1695]: time="2026-01-14T23:49:38.984802095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": 
failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:49:38.985126 containerd[1695]: time="2026-01-14T23:49:38.984850215Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:38.987105 kubelet[2898]: E0114 23:49:38.985470 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:49:38.987742 kubelet[2898]: E0114 23:49:38.987500 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:49:38.987742 kubelet[2898]: E0114 23:49:38.987643 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:38.988887 kubelet[2898]: E0114 23:49:38.988839 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:49:40.635066 containerd[1695]: time="2026-01-14T23:49:40.635005216Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:49:40.636961 kubelet[2898]: E0114 23:49:40.636561 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:49:40.967974 containerd[1695]: time="2026-01-14T23:49:40.967851993Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:40.969923 containerd[1695]: time="2026-01-14T23:49:40.969877159Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:49:40.970029 containerd[1695]: time="2026-01-14T23:49:40.969971159Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:40.970263 kubelet[2898]: E0114 23:49:40.970188 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:49:40.970326 kubelet[2898]: E0114 23:49:40.970273 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:49:40.970458 kubelet[2898]: E0114 23:49:40.970408 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:40.971547 kubelet[2898]: E0114 23:49:40.971519 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:49:41.632073 kubelet[2898]: E0114 23:49:41.631962 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: 
code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:49:48.634648 kubelet[2898]: E0114 23:49:48.634361 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:49:50.632404 kubelet[2898]: E0114 23:49:50.632355 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:49:50.634658 containerd[1695]: time="2026-01-14T23:49:50.634598442Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:49:50.961825 
containerd[1695]: time="2026-01-14T23:49:50.961423961Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:50.964025 containerd[1695]: time="2026-01-14T23:49:50.963960329Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:49:50.964133 containerd[1695]: time="2026-01-14T23:49:50.964002409Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:50.964238 kubelet[2898]: E0114 23:49:50.964199 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:49:50.964290 kubelet[2898]: E0114 23:49:50.964250 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:49:50.964445 kubelet[2898]: E0114 23:49:50.964395 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:50.965664 kubelet[2898]: E0114 23:49:50.965619 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:49:52.636137 kubelet[2898]: E0114 23:49:52.636089 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:49:53.633216 kubelet[2898]: E0114 23:49:53.633158 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:49:56.632306 containerd[1695]: time="2026-01-14T23:49:56.632246284Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:49:56.960921 containerd[1695]: time="2026-01-14T23:49:56.960291566Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:49:56.961917 containerd[1695]: time="2026-01-14T23:49:56.961821611Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:49:56.961917 containerd[1695]: time="2026-01-14T23:49:56.961867691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:49:56.962151 kubelet[2898]: E0114 23:49:56.962112 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:49:56.962704 kubelet[2898]: E0114 23:49:56.962492 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:49:56.962704 kubelet[2898]: E0114 23:49:56.962648 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:49:56.963845 kubelet[2898]: E0114 23:49:56.963802 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:50:01.633163 kubelet[2898]: E0114 23:50:01.632757 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:50:02.634360 kubelet[2898]: E0114 23:50:02.634294 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:50:04.633053 kubelet[2898]: E0114 23:50:04.632984 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:50:05.631736 kubelet[2898]: E0114 23:50:05.631668 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:50:06.633290 kubelet[2898]: E0114 23:50:06.633225 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:50:09.633395 kubelet[2898]: E0114 23:50:09.633347 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:50:13.633141 kubelet[2898]: E0114 
23:50:13.633033 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:50:14.123827 containerd[1695]: time="2026-01-14T23:50:14.123622157Z" level=info msg="container event discarded" container=73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5 type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.135123 containerd[1695]: time="2026-01-14T23:50:14.135049592Z" level=info msg="container event discarded" container=73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5 type=CONTAINER_STARTED_EVENT Jan 14 23:50:14.135123 containerd[1695]: time="2026-01-14T23:50:14.135109392Z" level=info msg="container event discarded" container=eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95 type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.135123 containerd[1695]: time="2026-01-14T23:50:14.135121312Z" level=info msg="container event discarded" container=eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95 type=CONTAINER_STARTED_EVENT Jan 14 23:50:14.155601 containerd[1695]: time="2026-01-14T23:50:14.155519334Z" level=info msg="container event discarded" container=e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0 type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.155601 containerd[1695]: time="2026-01-14T23:50:14.155558934Z" level=info msg="container event discarded" container=e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0 type=CONTAINER_STARTED_EVENT Jan 14 23:50:14.155601 containerd[1695]: time="2026-01-14T23:50:14.155567414Z" level=info msg="container event discarded" 
container=db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.155601 containerd[1695]: time="2026-01-14T23:50:14.155577614Z" level=info msg="container event discarded" container=8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206 type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.185842 containerd[1695]: time="2026-01-14T23:50:14.185770147Z" level=info msg="container event discarded" container=dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92 type=CONTAINER_CREATED_EVENT Jan 14 23:50:14.243131 containerd[1695]: time="2026-01-14T23:50:14.243068802Z" level=info msg="container event discarded" container=8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206 type=CONTAINER_STARTED_EVENT Jan 14 23:50:14.243131 containerd[1695]: time="2026-01-14T23:50:14.243114842Z" level=info msg="container event discarded" container=db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a type=CONTAINER_STARTED_EVENT Jan 14 23:50:14.265839 containerd[1695]: time="2026-01-14T23:50:14.265657031Z" level=info msg="container event discarded" container=dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92 type=CONTAINER_STARTED_EVENT Jan 14 23:50:16.636518 kubelet[2898]: E0114 23:50:16.636375 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:50:16.636990 kubelet[2898]: E0114 23:50:16.636588 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:50:18.633388 kubelet[2898]: E0114 23:50:18.633257 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:50:20.633753 kubelet[2898]: E0114 23:50:20.633623 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:50:24.632591 kubelet[2898]: E0114 23:50:24.632479 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:50:24.931005 containerd[1695]: time="2026-01-14T23:50:24.930752410Z" level=info msg="container event discarded" container=f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760 type=CONTAINER_CREATED_EVENT Jan 14 23:50:24.931005 containerd[1695]: time="2026-01-14T23:50:24.930841651Z" level=info msg="container event discarded" container=f086fe52eeac56cbc4f3b70be6ec876d874841f7084dbf19aa22b0aade7de760 type=CONTAINER_STARTED_EVENT Jan 14 23:50:24.952202 containerd[1695]: time="2026-01-14T23:50:24.952128476Z" level=info msg="container event discarded" container=8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49 type=CONTAINER_CREATED_EVENT Jan 14 23:50:25.004202 systemd[1797]: Created slice background.slice - User Background Tasks Slice. Jan 14 23:50:25.005260 systemd[1797]: Starting systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories... 
Jan 14 23:50:25.035457 systemd[1797]: Finished systemd-tmpfiles-clean.service - Cleanup of User's Temporary Files and Directories. Jan 14 23:50:25.044415 containerd[1695]: time="2026-01-14T23:50:25.044331397Z" level=info msg="container event discarded" container=8881301d222e9933633de80700d1424bab8b9b61eb350ead786763ac04552c49 type=CONTAINER_STARTED_EVENT Jan 14 23:50:25.098788 containerd[1695]: time="2026-01-14T23:50:25.098691923Z" level=info msg="container event discarded" container=eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7 type=CONTAINER_CREATED_EVENT Jan 14 23:50:25.098788 containerd[1695]: time="2026-01-14T23:50:25.098746804Z" level=info msg="container event discarded" container=eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7 type=CONTAINER_STARTED_EVENT Jan 14 23:50:25.632722 kubelet[2898]: E0114 23:50:25.632670 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:50:27.392397 containerd[1695]: time="2026-01-14T23:50:27.392324690Z" level=info msg="container event discarded" container=146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae type=CONTAINER_CREATED_EVENT Jan 14 23:50:27.440290 containerd[1695]: time="2026-01-14T23:50:27.439320994Z" level=info msg="container event discarded" container=146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae type=CONTAINER_STARTED_EVENT Jan 14 23:50:29.633698 kubelet[2898]: E0114 23:50:29.633649 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:50:30.633081 kubelet[2898]: E0114 23:50:30.633026 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:50:31.632518 kubelet[2898]: E0114 23:50:31.632449 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:50:33.632207 kubelet[2898]: E0114 23:50:33.632123 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:50:39.632570 kubelet[2898]: E0114 23:50:39.632515 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:50:39.633032 kubelet[2898]: E0114 23:50:39.632593 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:50:42.295690 containerd[1695]: time="2026-01-14T23:50:42.295636217Z" level=info msg="container event discarded" container=56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b type=CONTAINER_CREATED_EVENT Jan 14 23:50:42.295690 containerd[1695]: time="2026-01-14T23:50:42.295685617Z" level=info msg="container event discarded" container=56ded7c76b7fadc2b01b4305c9c9f232af8d1b34347114fa2e2ca1f1a460820b type=CONTAINER_STARTED_EVENT Jan 14 23:50:42.435529 containerd[1695]: time="2026-01-14T23:50:42.435474164Z" level=info msg="container event discarded" container=63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555 type=CONTAINER_CREATED_EVENT Jan 14 23:50:42.435711 containerd[1695]: time="2026-01-14T23:50:42.435683844Z" level=info msg="container event discarded" container=63e7aedb5481343e24930635c30b37aed60ef91c4461788c321a9b867779b555 type=CONTAINER_STARTED_EVENT Jan 14 23:50:42.639852 kubelet[2898]: E0114 23:50:42.637390 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:50:42.640222 kubelet[2898]: E0114 23:50:42.639915 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:50:43.633646 kubelet[2898]: E0114 23:50:43.633600 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:50:44.433499 containerd[1695]: time="2026-01-14T23:50:44.433419667Z" level=info msg="container event discarded" container=1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a type=CONTAINER_CREATED_EVENT Jan 14 23:50:44.507821 containerd[1695]: time="2026-01-14T23:50:44.507741494Z" level=info msg="container event discarded" container=1a886fe9a36aa5882eed142cb72ba7bd00e55ece373d8e18d62f36143f95869a type=CONTAINER_STARTED_EVENT Jan 14 23:50:46.048509 containerd[1695]: 
time="2026-01-14T23:50:46.048433881Z" level=info msg="container event discarded" container=d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd type=CONTAINER_CREATED_EVENT Jan 14 23:50:46.162957 containerd[1695]: time="2026-01-14T23:50:46.162127508Z" level=info msg="container event discarded" container=d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd type=CONTAINER_STARTED_EVENT Jan 14 23:50:47.632401 kubelet[2898]: E0114 23:50:47.632356 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:50:48.757685 containerd[1695]: time="2026-01-14T23:50:48.757635557Z" level=info msg="container event discarded" container=d57264076b3d81072e57cc47e096c3d3c094d88b1ab4bfe2ccd5d4bf4736a3cd type=CONTAINER_STOPPED_EVENT Jan 14 23:50:50.633010 kubelet[2898]: E0114 23:50:50.632914 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:50:51.632826 kubelet[2898]: E0114 23:50:51.632775 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:50:53.632433 kubelet[2898]: E0114 23:50:53.632388 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:50:55.633024 kubelet[2898]: E0114 23:50:55.632847 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:50:57.633372 containerd[1695]: 
time="2026-01-14T23:50:57.632691868Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:50:57.966677 containerd[1695]: time="2026-01-14T23:50:57.966484368Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:57.969631 containerd[1695]: time="2026-01-14T23:50:57.969537137Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:50:57.969631 containerd[1695]: time="2026-01-14T23:50:57.969577337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:57.969778 kubelet[2898]: E0114 23:50:57.969735 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:57.970365 kubelet[2898]: E0114 23:50:57.969783 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:50:57.970365 kubelet[2898]: E0114 23:50:57.969891 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:57.971864 containerd[1695]: time="2026-01-14T23:50:57.971812744Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:50:58.306317 containerd[1695]: 
time="2026-01-14T23:50:58.306190926Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:50:58.307807 containerd[1695]: time="2026-01-14T23:50:58.307769570Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:50:58.307872 containerd[1695]: time="2026-01-14T23:50:58.307822691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:50:58.309045 kubelet[2898]: E0114 23:50:58.309000 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:58.309112 kubelet[2898]: E0114 23:50:58.309055 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:50:58.309191 kubelet[2898]: E0114 23:50:58.309155 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:50:58.310356 kubelet[2898]: E0114 23:50:58.310322 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:50:59.632018 kubelet[2898]: E0114 23:50:59.631886 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:51:01.632294 kubelet[2898]: E0114 23:51:01.632241 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: 
not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:02.637290 containerd[1695]: time="2026-01-14T23:51:02.636441234Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:51:02.968513 containerd[1695]: time="2026-01-14T23:51:02.968383888Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:02.969675 containerd[1695]: time="2026-01-14T23:51:02.969618971Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:51:02.969795 containerd[1695]: time="2026-01-14T23:51:02.969641492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:02.969881 kubelet[2898]: E0114 23:51:02.969844 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:51:02.970132 kubelet[2898]: E0114 23:51:02.969892 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:51:02.970132 kubelet[2898]: E0114 23:51:02.970008 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:02.971291 kubelet[2898]: E0114 23:51:02.971242 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:51:05.635732 containerd[1695]: time="2026-01-14T23:51:05.635622876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:05.973300 containerd[1695]: time="2026-01-14T23:51:05.973158747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:05.975041 containerd[1695]: 
time="2026-01-14T23:51:05.974922192Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:05.975041 containerd[1695]: time="2026-01-14T23:51:05.974979192Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:05.976041 kubelet[2898]: E0114 23:51:05.975365 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:05.976041 kubelet[2898]: E0114 23:51:05.975414 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:05.976721 kubelet[2898]: E0114 23:51:05.976560 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:05.977889 kubelet[2898]: E0114 23:51:05.977850 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:51:06.635037 containerd[1695]: time="2026-01-14T23:51:06.634866048Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:51:06.973323 containerd[1695]: time="2026-01-14T23:51:06.973174962Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:06.975215 containerd[1695]: time="2026-01-14T23:51:06.975174088Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:51:06.975317 containerd[1695]: time="2026-01-14T23:51:06.975258888Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:06.976363 kubelet[2898]: E0114 23:51:06.975444 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:51:06.976363 kubelet[2898]: E0114 23:51:06.975490 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:51:06.976363 kubelet[2898]: E0114 23:51:06.975620 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:06.978182 containerd[1695]: time="2026-01-14T23:51:06.977816096Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:51:07.318309 containerd[1695]: time="2026-01-14T23:51:07.317661694Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:07.324909 containerd[1695]: time="2026-01-14T23:51:07.324814276Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:51:07.325041 containerd[1695]: time="2026-01-14T23:51:07.324862796Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:07.325148 kubelet[2898]: E0114 23:51:07.325100 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:51:07.325207 kubelet[2898]: E0114 23:51:07.325152 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:51:07.325338 kubelet[2898]: E0114 23:51:07.325294 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:07.326632 kubelet[2898]: E0114 23:51:07.326580 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:51:10.633916 kubelet[2898]: E0114 23:51:10.633666 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:51:11.632743 containerd[1695]: time="2026-01-14T23:51:11.632649715Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:51:11.963856 containerd[1695]: time="2026-01-14T23:51:11.963619086Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:11.964929 containerd[1695]: time="2026-01-14T23:51:11.964825450Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:51:11.964929 containerd[1695]: time="2026-01-14T23:51:11.964859050Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:11.965091 kubelet[2898]: E0114 23:51:11.965052 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:11.965399 kubelet[2898]: E0114 23:51:11.965102 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:51:11.965399 kubelet[2898]: E0114 23:51:11.965221 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:11.966754 kubelet[2898]: E0114 23:51:11.966713 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:51:13.632454 kubelet[2898]: E0114 23:51:13.632407 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:15.632723 kubelet[2898]: E0114 23:51:15.632660 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 
14 23:51:19.633591 kubelet[2898]: E0114 23:51:19.633282 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:51:20.634814 kubelet[2898]: E0114 23:51:20.634715 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:51:21.635254 kubelet[2898]: E0114 23:51:21.635032 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not 
found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:51:24.632498 kubelet[2898]: E0114 23:51:24.632451 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:51:24.633616 containerd[1695]: time="2026-01-14T23:51:24.632900468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:51:24.985653 containerd[1695]: time="2026-01-14T23:51:24.985412665Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:51:24.992594 containerd[1695]: time="2026-01-14T23:51:24.992140646Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:51:24.993296 containerd[1695]: time="2026-01-14T23:51:24.992250526Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:51:24.993367 kubelet[2898]: E0114 
23:51:24.992903 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:51:24.993367 kubelet[2898]: E0114 23:51:24.992948 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:51:24.995510 kubelet[2898]: E0114 23:51:24.993512 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/va
r/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:51:24.996892 kubelet[2898]: E0114 23:51:24.996850 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:26.634187 kubelet[2898]: E0114 23:51:26.634145 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:51:31.632401 kubelet[2898]: E0114 23:51:31.632348 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:51:33.633200 kubelet[2898]: E0114 23:51:33.633154 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:51:35.633468 kubelet[2898]: E0114 23:51:35.633287 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:51:36.632294 kubelet[2898]: E0114 23:51:36.632240 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:37.633553 kubelet[2898]: E0114 23:51:37.633504 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:51:40.745449 update_engine[1674]: I20260114 23:51:40.745385 1674 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 14 23:51:40.745449 update_engine[1674]: I20260114 23:51:40.745435 1674 prefs.cc:52] certificate-report-to-send-download not present in /var/lib/update_engine/prefs Jan 14 23:51:40.745805 update_engine[1674]: I20260114 23:51:40.745715 1674 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 14 23:51:40.746234 update_engine[1674]: I20260114 23:51:40.746192 1674 omaha_request_params.cc:62] Current group set to beta Jan 14 23:51:40.746327 update_engine[1674]: I20260114 23:51:40.746310 1674 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 14 23:51:40.746327 update_engine[1674]: I20260114 23:51:40.746323 1674 update_attempter.cc:643] Scheduling an action processor start. 
Jan 14 23:51:40.746385 update_engine[1674]: I20260114 23:51:40.746340 1674 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 23:51:40.746626 update_engine[1674]: I20260114 23:51:40.746539 1674 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 14 23:51:40.746626 update_engine[1674]: I20260114 23:51:40.746589 1674 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 23:51:40.746626 update_engine[1674]: I20260114 23:51:40.746596 1674 omaha_request_action.cc:272] Request: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: Jan 14 23:51:40.746626 update_engine[1674]: I20260114 23:51:40.746603 1674 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:51:40.746987 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 14 23:51:40.748261 update_engine[1674]: I20260114 23:51:40.748216 1674 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:51:40.748964 update_engine[1674]: I20260114 23:51:40.748927 1674 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 23:51:40.758296 update_engine[1674]: E20260114 23:51:40.757943 1674 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:51:40.758296 update_engine[1674]: I20260114 23:51:40.758041 1674 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 14 23:51:41.632526 kubelet[2898]: E0114 23:51:41.632485 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:51:46.633742 kubelet[2898]: E0114 23:51:46.633663 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:51:47.632636 kubelet[2898]: E0114 23:51:47.632577 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:51:48.633995 kubelet[2898]: E0114 23:51:48.633814 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:48.635013 kubelet[2898]: E0114 23:51:48.634192 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 
23:51:50.654386 update_engine[1674]: I20260114 23:51:50.654305 1674 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:51:50.654733 update_engine[1674]: I20260114 23:51:50.654398 1674 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:51:50.654760 update_engine[1674]: I20260114 23:51:50.654735 1674 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 23:51:50.667286 update_engine[1674]: E20260114 23:51:50.666311 1674 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:51:50.667286 update_engine[1674]: I20260114 23:51:50.666407 1674 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 14 23:51:51.632574 kubelet[2898]: E0114 23:51:51.632500 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:51:56.632738 kubelet[2898]: E0114 23:51:56.632681 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:51:59.632965 kubelet[2898]: E0114 23:51:59.632585 2898 pod_workers.go:1301] "Error syncing pod, skipping" 
err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:51:59.633783 kubelet[2898]: E0114 23:51:59.633396 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:51:59.633783 kubelet[2898]: E0114 23:51:59.633733 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" 
podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:52:00.654704 update_engine[1674]: I20260114 23:52:00.654626 1674 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:52:00.655326 update_engine[1674]: I20260114 23:52:00.654731 1674 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:52:00.655326 update_engine[1674]: I20260114 23:52:00.655105 1674 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 14 23:52:00.660443 update_engine[1674]: E20260114 23:52:00.660398 1674 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:52:00.660518 update_engine[1674]: I20260114 23:52:00.660486 1674 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 14 23:52:01.633089 kubelet[2898]: E0114 23:52:01.633026 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:52:04.632047 kubelet[2898]: E0114 23:52:04.631936 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:52:09.632562 kubelet[2898]: E0114 23:52:09.632516 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:52:10.633290 kubelet[2898]: E0114 23:52:10.633142 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:52:10.648331 update_engine[1674]: I20260114 23:52:10.648077 1674 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:52:10.648331 update_engine[1674]: I20260114 23:52:10.648190 1674 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:52:10.649087 update_engine[1674]: I20260114 23:52:10.649057 1674 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 23:52:10.655333 update_engine[1674]: E20260114 23:52:10.654691 1674 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.654839 1674 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.654860 1674 omaha_request_action.cc:617] Omaha request response: Jan 14 23:52:10.655333 update_engine[1674]: E20260114 23:52:10.655009 1674 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655046 1674 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655061 1674 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655069 1674 update_attempter.cc:306] Processing Done. Jan 14 23:52:10.655333 update_engine[1674]: E20260114 23:52:10.655081 1674 update_attempter.cc:619] Update failed. Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655086 1674 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655090 1674 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655095 1674 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. 
Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655165 1674 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655185 1674 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 14 23:52:10.655333 update_engine[1674]: I20260114 23:52:10.655191 1674 omaha_request_action.cc:272] Request: Jan 14 23:52:10.655333 update_engine[1674]: Jan 14 23:52:10.655333 update_engine[1674]: Jan 14 23:52:10.655892 update_engine[1674]: Jan 14 23:52:10.655892 update_engine[1674]: Jan 14 23:52:10.655892 update_engine[1674]: Jan 14 23:52:10.655892 update_engine[1674]: Jan 14 23:52:10.655892 update_engine[1674]: I20260114 23:52:10.655197 1674 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 14 23:52:10.655892 update_engine[1674]: I20260114 23:52:10.655215 1674 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 14 23:52:10.655892 update_engine[1674]: I20260114 23:52:10.655515 1674 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 14 23:52:10.656023 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 14 23:52:10.661596 update_engine[1674]: E20260114 23:52:10.661548 1674 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661621 1674 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661630 1674 omaha_request_action.cc:617] Omaha request response: Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661636 1674 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661641 1674 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661646 1674 update_attempter.cc:306] Processing Done. Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661651 1674 update_attempter.cc:310] Error event sent. 
Jan 14 23:52:10.661680 update_engine[1674]: I20260114 23:52:10.661659 1674 update_check_scheduler.cc:74] Next update check in 43m47s Jan 14 23:52:10.661993 locksmithd[1724]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 14 23:52:11.632819 kubelet[2898]: E0114 23:52:11.632762 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:52:13.625633 systemd[1]: cri-containerd-146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae.scope: Deactivated successfully. Jan 14 23:52:13.625989 systemd[1]: cri-containerd-146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae.scope: Consumed 51.855s CPU time, 111M memory peak. 
Jan 14 23:52:13.627067 containerd[1695]: time="2026-01-14T23:52:13.627023056Z" level=info msg="received container exit event container_id:\"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\" id:\"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\" pid:3240 exit_status:1 exited_at:{seconds:1768434733 nanos:626730695}" Jan 14 23:52:13.629000 audit: BPF prog-id=146 op=UNLOAD Jan 14 23:52:13.631349 kernel: kauditd_printk_skb: 74 callbacks suppressed Jan 14 23:52:13.631413 kernel: audit: type=1334 audit(1768434733.629:747): prog-id=146 op=UNLOAD Jan 14 23:52:13.629000 audit: BPF prog-id=150 op=UNLOAD Jan 14 23:52:13.633292 kernel: audit: type=1334 audit(1768434733.629:748): prog-id=150 op=UNLOAD Jan 14 23:52:13.647373 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae-rootfs.mount: Deactivated successfully. Jan 14 23:52:13.887291 kubelet[2898]: E0114 23:52:13.886996 2898 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.22.230:42736->10.0.22.219:2379: read: connection timed out" Jan 14 23:52:13.889005 systemd[1]: cri-containerd-dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92.scope: Deactivated successfully. Jan 14 23:52:13.889466 systemd[1]: cri-containerd-dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92.scope: Consumed 5.494s CPU time, 24.5M memory peak. 
Jan 14 23:52:13.889000 audit: BPF prog-id=256 op=LOAD Jan 14 23:52:13.891403 containerd[1695]: time="2026-01-14T23:52:13.890770621Z" level=info msg="received container exit event container_id:\"dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92\" id:\"dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92\" pid:2759 exit_status:1 exited_at:{seconds:1768434733 nanos:890105859}" Jan 14 23:52:13.889000 audit: BPF prog-id=93 op=UNLOAD Jan 14 23:52:13.893006 kernel: audit: type=1334 audit(1768434733.889:749): prog-id=256 op=LOAD Jan 14 23:52:13.893144 kernel: audit: type=1334 audit(1768434733.889:750): prog-id=93 op=UNLOAD Jan 14 23:52:13.892000 audit: BPF prog-id=108 op=UNLOAD Jan 14 23:52:13.892000 audit: BPF prog-id=112 op=UNLOAD Jan 14 23:52:13.895632 kernel: audit: type=1334 audit(1768434733.892:751): prog-id=108 op=UNLOAD Jan 14 23:52:13.895711 kernel: audit: type=1334 audit(1768434733.892:752): prog-id=112 op=UNLOAD Jan 14 23:52:13.912102 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92-rootfs.mount: Deactivated successfully. Jan 14 23:52:14.022730 systemd[1]: cri-containerd-8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206.scope: Deactivated successfully. Jan 14 23:52:14.023417 systemd[1]: cri-containerd-8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206.scope: Consumed 6.179s CPU time, 63.7M memory peak. 
Jan 14 23:52:14.023000 audit: BPF prog-id=257 op=LOAD Jan 14 23:52:14.025016 containerd[1695]: time="2026-01-14T23:52:14.024784911Z" level=info msg="received container exit event container_id:\"8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206\" id:\"8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206\" pid:2730 exit_status:1 exited_at:{seconds:1768434734 nanos:23930028}" Jan 14 23:52:14.023000 audit: BPF prog-id=88 op=UNLOAD Jan 14 23:52:14.026308 kernel: audit: type=1334 audit(1768434734.023:753): prog-id=257 op=LOAD Jan 14 23:52:14.026372 kernel: audit: type=1334 audit(1768434734.023:754): prog-id=88 op=UNLOAD Jan 14 23:52:14.031000 audit: BPF prog-id=103 op=UNLOAD Jan 14 23:52:14.031000 audit: BPF prog-id=107 op=UNLOAD Jan 14 23:52:14.034002 kernel: audit: type=1334 audit(1768434734.031:755): prog-id=103 op=UNLOAD Jan 14 23:52:14.034046 kernel: audit: type=1334 audit(1768434734.031:756): prog-id=107 op=UNLOAD Jan 14 23:52:14.046998 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206-rootfs.mount: Deactivated successfully. 
Jan 14 23:52:14.528570 kubelet[2898]: I0114 23:52:14.528500 2898 scope.go:117] "RemoveContainer" containerID="dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92" Jan 14 23:52:14.530374 containerd[1695]: time="2026-01-14T23:52:14.530316415Z" level=info msg="CreateContainer within sandbox \"e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 14 23:52:14.531090 kubelet[2898]: I0114 23:52:14.530815 2898 scope.go:117] "RemoveContainer" containerID="8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206" Jan 14 23:52:14.533068 containerd[1695]: time="2026-01-14T23:52:14.533014383Z" level=info msg="CreateContainer within sandbox \"eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 14 23:52:14.534165 kubelet[2898]: I0114 23:52:14.533904 2898 scope.go:117] "RemoveContainer" containerID="146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae" Jan 14 23:52:14.542700 containerd[1695]: time="2026-01-14T23:52:14.542644813Z" level=info msg="Container 990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:14.547839 containerd[1695]: time="2026-01-14T23:52:14.547788348Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 14 23:52:14.549296 containerd[1695]: time="2026-01-14T23:52:14.548994712Z" level=info msg="Container c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:14.558938 containerd[1695]: time="2026-01-14T23:52:14.558886182Z" level=info msg="CreateContainer within sandbox \"e9f4fe1f8799ede63f0c2482ae40290f8475e1a9cf4647c08e39df2ce1e09de0\" for 
&ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9\"" Jan 14 23:52:14.559547 containerd[1695]: time="2026-01-14T23:52:14.559515624Z" level=info msg="StartContainer for \"990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9\"" Jan 14 23:52:14.560918 containerd[1695]: time="2026-01-14T23:52:14.560887188Z" level=info msg="connecting to shim 990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9" address="unix:///run/containerd/s/2ed5a4b04d5ba33a844034ea44ea6dc6a2bb7bc80a666fe573555a5ed2a8fae8" protocol=ttrpc version=3 Jan 14 23:52:14.565134 containerd[1695]: time="2026-01-14T23:52:14.565037721Z" level=info msg="CreateContainer within sandbox \"eba0db63ffbcbbb606fbeaf52c4ff89d68c4a9d2019cbdbf216e3e64071abd95\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae\"" Jan 14 23:52:14.566080 containerd[1695]: time="2026-01-14T23:52:14.566055284Z" level=info msg="StartContainer for \"c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae\"" Jan 14 23:52:14.567386 containerd[1695]: time="2026-01-14T23:52:14.567357688Z" level=info msg="connecting to shim c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae" address="unix:///run/containerd/s/4e8ea3ad27e8d5810075e10b08c0e8d908f7a88ab430f2b490a585ea504e0a17" protocol=ttrpc version=3 Jan 14 23:52:14.573534 containerd[1695]: time="2026-01-14T23:52:14.573455067Z" level=info msg="Container 25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:14.581485 systemd[1]: Started cri-containerd-990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9.scope - libcontainer container 990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9. 
Jan 14 23:52:14.582482 containerd[1695]: time="2026-01-14T23:52:14.582427974Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\"" Jan 14 23:52:14.583596 containerd[1695]: time="2026-01-14T23:52:14.583562338Z" level=info msg="StartContainer for \"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\"" Jan 14 23:52:14.585005 containerd[1695]: time="2026-01-14T23:52:14.584783701Z" level=info msg="connecting to shim 25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" protocol=ttrpc version=3 Jan 14 23:52:14.591523 systemd[1]: Started cri-containerd-c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae.scope - libcontainer container c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae. 
Jan 14 23:52:14.595000 audit: BPF prog-id=258 op=LOAD Jan 14 23:52:14.596000 audit: BPF prog-id=259 op=LOAD Jan 14 23:52:14.596000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.596000 audit: BPF prog-id=259 op=UNLOAD Jan 14 23:52:14.596000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.596000 audit: BPF prog-id=260 op=LOAD Jan 14 23:52:14.596000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.596000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.596000 audit: BPF prog-id=261 op=LOAD Jan 14 23:52:14.596000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.596000 audit: BPF prog-id=261 op=UNLOAD Jan 14 23:52:14.596000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.596000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.597000 audit: BPF prog-id=260 op=UNLOAD Jan 14 23:52:14.597000 audit[5460]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:52:14.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.597000 audit: BPF prog-id=262 op=LOAD Jan 14 23:52:14.597000 audit[5460]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2621 pid=5460 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.597000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3939306666656431346239363966326462633366663231633561393430 Jan 14 23:52:14.605479 systemd[1]: Started cri-containerd-25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe.scope - libcontainer container 25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe. 
Jan 14 23:52:14.610000 audit: BPF prog-id=263 op=LOAD Jan 14 23:52:14.611000 audit: BPF prog-id=264 op=LOAD Jan 14 23:52:14.611000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.611000 audit: BPF prog-id=264 op=UNLOAD Jan 14 23:52:14.611000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.612000 audit: BPF prog-id=265 op=LOAD Jan 14 23:52:14.612000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.612000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.612000 audit: BPF prog-id=266 op=LOAD Jan 14 23:52:14.612000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.612000 audit: BPF prog-id=266 op=UNLOAD Jan 14 23:52:14.612000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.612000 audit: BPF prog-id=265 op=UNLOAD Jan 14 23:52:14.612000 audit[5466]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
14 23:52:14.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.612000 audit: BPF prog-id=267 op=LOAD Jan 14 23:52:14.612000 audit[5466]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=2605 pid=5466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6339666666616662653333373631373835373039383130396361633666 Jan 14 23:52:14.622000 audit: BPF prog-id=268 op=LOAD Jan 14 23:52:14.624000 audit: BPF prog-id=269 op=LOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=269 op=UNLOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=270 op=LOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=271 op=LOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=271 op=UNLOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=270 op=UNLOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.624000 audit: BPF prog-id=272 op=LOAD Jan 14 23:52:14.624000 audit[5491]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3039 pid=5491 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:14.624000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3235613336383666303263626431323834643664643162376335386366 Jan 14 23:52:14.632998 kubelet[2898]: E0114 23:52:14.632945 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: 
\"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:52:14.639478 containerd[1695]: time="2026-01-14T23:52:14.639414708Z" level=info msg="StartContainer for \"990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9\" returns successfully" Jan 14 23:52:14.657014 containerd[1695]: time="2026-01-14T23:52:14.656949602Z" level=info msg="StartContainer for \"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\" returns successfully" Jan 14 23:52:14.660960 containerd[1695]: time="2026-01-14T23:52:14.660850254Z" level=info msg="StartContainer for \"c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae\" returns successfully" Jan 14 23:52:15.102623 kubelet[2898]: E0114 23:52:15.102515 2898 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.22.230:42592->10.0.22.219:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-5b767987c5-2glxx.188abdce6eb115e2 calico-apiserver 1947 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-5b767987c5-2glxx,UID:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,APIVersion:v1,ResourceVersion:1056,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:18 +0000 UTC,LastTimestamp:2026-01-14 23:52:04.631896497 +0000 UTC m=+406.081435051,Count:14,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:52:16.633205 kubelet[2898]: E0114 23:52:16.633127 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:52:18.633077 kubelet[2898]: E0114 23:52:18.632833 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:52:20.632795 kubelet[2898]: E0114 23:52:20.632698 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:52:23.632867 kubelet[2898]: E0114 23:52:23.632802 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:52:23.632867 kubelet[2898]: E0114 23:52:23.632857 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" 
podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:52:23.888699 kubelet[2898]: E0114 23:52:23.888398 2898 controller.go:195] "Failed to update lease" err="Put \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" Jan 14 23:52:24.958253 kubelet[2898]: I0114 23:52:24.958189 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.22.230:42678->10.0.22.219:2379: read: connection timed out" Jan 14 23:52:25.633154 kubelet[2898]: E0114 23:52:25.633066 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:52:25.875121 containerd[1695]: time="2026-01-14T23:52:25.875062871Z" level=info msg="received container exit event container_id:\"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\" id:\"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\" pid:5512 exit_status:1 exited_at:{seconds:1768434745 nanos:874825630}" Jan 14 
23:52:25.875316 systemd[1]: cri-containerd-25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe.scope: Deactivated successfully. Jan 14 23:52:25.883996 kernel: kauditd_printk_skb: 66 callbacks suppressed Jan 14 23:52:25.884107 kernel: audit: type=1334 audit(1768434745.881:781): prog-id=268 op=UNLOAD Jan 14 23:52:25.881000 audit: BPF prog-id=268 op=UNLOAD Jan 14 23:52:25.881000 audit: BPF prog-id=272 op=UNLOAD Jan 14 23:52:25.885331 kernel: audit: type=1334 audit(1768434745.881:782): prog-id=272 op=UNLOAD Jan 14 23:52:25.896074 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe-rootfs.mount: Deactivated successfully. Jan 14 23:52:26.573172 kubelet[2898]: I0114 23:52:26.573126 2898 scope.go:117] "RemoveContainer" containerID="146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae" Jan 14 23:52:26.573609 kubelet[2898]: I0114 23:52:26.573444 2898 scope.go:117] "RemoveContainer" containerID="25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe" Jan 14 23:52:26.573609 kubelet[2898]: E0114 23:52:26.573583 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 10s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:52:26.575154 containerd[1695]: time="2026-01-14T23:52:26.575113050Z" level=info msg="RemoveContainer for \"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\"" Jan 14 23:52:26.592728 containerd[1695]: time="2026-01-14T23:52:26.592677303Z" level=info msg="RemoveContainer for \"146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae\" returns successfully" Jan 14 23:52:28.633990 kubelet[2898]: E0114 23:52:28.633940 2898 pod_workers.go:1301] 
"Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:52:29.633132 kubelet[2898]: E0114 23:52:29.633077 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:52:31.632221 kubelet[2898]: E0114 23:52:31.632150 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" 
podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:52:33.889078 kubelet[2898]: E0114 23:52:33.888710 2898 controller.go:195] "Failed to update lease" err="Put \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" Jan 14 23:52:34.632640 kubelet[2898]: E0114 23:52:34.632467 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:52:34.632640 kubelet[2898]: E0114 23:52:34.632569 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:52:38.633062 kubelet[2898]: I0114 23:52:38.632820 2898 scope.go:117] "RemoveContainer" containerID="25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe" Jan 14 23:52:38.634281 kubelet[2898]: E0114 23:52:38.634222 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:52:38.635312 containerd[1695]: time="2026-01-14T23:52:38.635262691Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:2,}" Jan 14 23:52:38.644532 containerd[1695]: time="2026-01-14T23:52:38.643943317Z" level=info msg="Container a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:52:38.650834 containerd[1695]: time="2026-01-14T23:52:38.650781538Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for &ContainerMetadata{Name:tigera-operator,Attempt:2,} returns container id \"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\"" Jan 14 23:52:38.651637 containerd[1695]: time="2026-01-14T23:52:38.651608181Z" level=info msg="StartContainer for \"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\"" Jan 14 23:52:38.652514 containerd[1695]: time="2026-01-14T23:52:38.652488344Z" level=info msg="connecting to shim a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" protocol=ttrpc version=3 Jan 14 23:52:38.673561 
systemd[1]: Started cri-containerd-a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385.scope - libcontainer container a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385. Jan 14 23:52:38.683000 audit: BPF prog-id=273 op=LOAD Jan 14 23:52:38.684000 audit: BPF prog-id=274 op=LOAD Jan 14 23:52:38.686777 kernel: audit: type=1334 audit(1768434758.683:783): prog-id=273 op=LOAD Jan 14 23:52:38.686866 kernel: audit: type=1334 audit(1768434758.684:784): prog-id=274 op=LOAD Jan 14 23:52:38.686895 kernel: audit: type=1300 audit(1768434758.684:784): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.684000 audit[5584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe180 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.690077 kernel: audit: type=1327 audit(1768434758.684:784): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.684000 audit: BPF prog-id=274 op=UNLOAD Jan 14 23:52:38.693863 kernel: audit: type=1334 audit(1768434758.684:785): prog-id=274 op=UNLOAD Jan 14 23:52:38.693931 
kernel: audit: type=1300 audit(1768434758.684:785): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.684000 audit[5584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.700019 kernel: audit: type=1327 audit(1768434758.684:785): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.684000 audit: BPF prog-id=275 op=LOAD Jan 14 23:52:38.701684 kernel: audit: type=1334 audit(1768434758.684:786): prog-id=275 op=LOAD Jan 14 23:52:38.684000 audit[5584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.705235 kernel: audit: type=1300 audit(1768434758.684:786): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe3e8 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.705354 kernel: audit: type=1327 audit(1768434758.684:786): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.684000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.688000 audit: BPF prog-id=276 op=LOAD Jan 14 23:52:38.688000 audit[5584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40000fe168 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.691000 audit: BPF prog-id=276 op=UNLOAD Jan 14 23:52:38.691000 audit[5584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.691000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.691000 audit: BPF prog-id=275 op=UNLOAD Jan 14 23:52:38.691000 audit[5584]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.692000 audit: BPF prog-id=277 op=LOAD Jan 14 23:52:38.692000 audit[5584]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40000fe648 a2=98 a3=0 items=0 ppid=3039 pid=5584 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:52:38.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130353133643330363735613762366464616464323764366161336339 Jan 14 23:52:38.723710 containerd[1695]: time="2026-01-14T23:52:38.723670201Z" level=info msg="StartContainer for \"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\" returns successfully" Jan 14 23:52:40.632310 kubelet[2898]: E0114 23:52:40.632250 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:52:43.632498 kubelet[2898]: E0114 23:52:43.632257 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:52:43.633162 kubelet[2898]: E0114 23:52:43.633032 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:52:43.892191 kubelet[2898]: E0114 23:52:43.890493 
2898 controller.go:195] "Failed to update lease" err="the server was unable to return a response in the time allotted, but may still be processing the request (put leases.coordination.k8s.io ci-4515-1-0-n-1d3be4f164)" Jan 14 23:52:46.633012 kubelet[2898]: E0114 23:52:46.632247 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:52:47.632448 kubelet[2898]: E0114 23:52:47.632405 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:52:49.105549 kubelet[2898]: E0114 23:52:49.105248 2898 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{calico-apiserver-5b767987c5-2glxx.188abdce6eb1471a calico-apiserver 1948 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-5b767987c5-2glxx,UID:300b5f0b-ed7c-4a04-a4b8-68a71ea25297,APIVersion:v1,ResourceVersion:1056,FieldPath:spec.containers{calico-apiserver},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:18 +0000 UTC,LastTimestamp:2026-01-14 23:52:04.631907977 +0000 UTC m=+406.081446491,Count:14,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:52:49.633126 kubelet[2898]: E0114 23:52:49.633077 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:52:49.909331 systemd[1]: cri-containerd-a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385.scope: Deactivated successfully. 
Jan 14 23:52:49.910872 containerd[1695]: time="2026-01-14T23:52:49.910825096Z" level=info msg="received container exit event container_id:\"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\" id:\"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\" pid:5596 exit_status:1 exited_at:{seconds:1768434769 nanos:910596695}" Jan 14 23:52:49.913000 audit: BPF prog-id=273 op=UNLOAD Jan 14 23:52:49.915961 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:52:49.916019 kernel: audit: type=1334 audit(1768434769.913:791): prog-id=273 op=UNLOAD Jan 14 23:52:49.913000 audit: BPF prog-id=277 op=UNLOAD Jan 14 23:52:49.917707 kernel: audit: type=1334 audit(1768434769.913:792): prog-id=277 op=UNLOAD Jan 14 23:52:49.934368 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385-rootfs.mount: Deactivated successfully. Jan 14 23:52:50.626033 kubelet[2898]: I0114 23:52:50.626005 2898 scope.go:117] "RemoveContainer" containerID="25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe" Jan 14 23:52:50.626569 kubelet[2898]: I0114 23:52:50.626437 2898 scope.go:117] "RemoveContainer" containerID="a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385" Jan 14 23:52:50.626620 kubelet[2898]: E0114 23:52:50.626591 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:52:50.627721 containerd[1695]: time="2026-01-14T23:52:50.627692045Z" level=info msg="RemoveContainer for \"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\"" Jan 14 23:52:50.632591 containerd[1695]: time="2026-01-14T23:52:50.632478660Z" 
level=info msg="RemoveContainer for \"25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe\" returns successfully" Jan 14 23:52:51.632435 kubelet[2898]: E0114 23:52:51.632369 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:52:53.891092 kubelet[2898]: E0114 23:52:53.890983 2898 controller.go:195] "Failed to update lease" err="Put \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" Jan 14 23:52:53.891092 kubelet[2898]: I0114 23:52:53.891079 2898 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 14 23:52:56.610437 containerd[1695]: time="2026-01-14T23:52:56.610380961Z" level=info msg="container event discarded" container=0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443 type=CONTAINER_CREATED_EVENT Jan 14 23:52:56.734358 containerd[1695]: time="2026-01-14T23:52:56.734255540Z" level=info msg="container event discarded" container=0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443 type=CONTAINER_STARTED_EVENT Jan 14 23:52:58.632240 kubelet[2898]: E0114 23:52:58.632137 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:52:58.632641 kubelet[2898]: E0114 23:52:58.632137 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:52:58.633174 kubelet[2898]: E0114 23:52:58.633084 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:52:59.453343 containerd[1695]: time="2026-01-14T23:52:59.453250486Z" level=info msg="container event discarded" container=0b06150c3af7ccd840c6f32355d89d3efd87818c1158576afc6198180289a443 type=CONTAINER_STOPPED_EVENT Jan 14 23:53:00.632489 
kubelet[2898]: E0114 23:53:00.632445 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:53:01.633431 kubelet[2898]: E0114 23:53:01.633376 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:53:02.631879 kubelet[2898]: I0114 23:53:02.631675 2898 scope.go:117] "RemoveContainer" containerID="a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385" Jan 14 23:53:02.631879 kubelet[2898]: E0114 23:53:02.631834 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 20s restarting failed container=tigera-operator 
pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:53:03.892474 kubelet[2898]: E0114 23:53:03.892305 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="200ms" Jan 14 23:53:04.632144 kubelet[2898]: E0114 23:53:04.632054 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:53:07.396950 containerd[1695]: time="2026-01-14T23:53:07.396895352Z" level=info msg="container event discarded" container=379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8 type=CONTAINER_CREATED_EVENT Jan 14 23:53:07.529369 containerd[1695]: time="2026-01-14T23:53:07.529298796Z" level=info msg="container event discarded" container=379865f59d1c6d45a47efad7fd278f4278281056549fb1d3476778d9d1f663a8 type=CONTAINER_STARTED_EVENT Jan 14 23:53:08.613165 containerd[1695]: time="2026-01-14T23:53:08.613061907Z" level=info msg="container event discarded" container=1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb type=CONTAINER_CREATED_EVENT Jan 14 23:53:08.613165 containerd[1695]: time="2026-01-14T23:53:08.613144947Z" level=info msg="container event discarded" 
container=1ffaddb42e0f0a556bfc5d9b8c82b1df512287da4c9686ebcf850ac0e1bef6bb type=CONTAINER_STARTED_EVENT Jan 14 23:53:10.633765 kubelet[2898]: E0114 23:53:10.633701 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:53:10.634359 kubelet[2898]: E0114 23:53:10.633942 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:53:10.634359 kubelet[2898]: E0114 23:53:10.634311 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:53:12.633502 kubelet[2898]: E0114 23:53:12.633105 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:53:12.847171 containerd[1695]: time="2026-01-14T23:53:12.847101401Z" level=info msg="container event discarded" container=a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc type=CONTAINER_CREATED_EVENT Jan 14 23:53:12.847171 containerd[1695]: time="2026-01-14T23:53:12.847152841Z" level=info msg="container event discarded" container=a57a2bd96525178c049a232772b07d6b9769f87fdcec2dab27a0a9bb82c2fadc type=CONTAINER_STARTED_EVENT Jan 14 23:53:13.832751 containerd[1695]: time="2026-01-14T23:53:13.832642172Z" level=info msg="container event discarded" container=2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d type=CONTAINER_CREATED_EVENT Jan 14 23:53:13.832751 containerd[1695]: time="2026-01-14T23:53:13.832699932Z" level=info msg="container event discarded" 
container=2a8f7fb1d2b614b8e3cd46472795e9dd7037206b3514ea221f421b2d649a001d type=CONTAINER_STARTED_EVENT Jan 14 23:53:14.093777 kubelet[2898]: E0114 23:53:14.093336 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="400ms" Jan 14 23:53:15.632405 kubelet[2898]: E0114 23:53:15.632357 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:53:15.887200 containerd[1695]: time="2026-01-14T23:53:15.887015168Z" level=info msg="container event discarded" container=c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2 type=CONTAINER_CREATED_EVENT Jan 14 23:53:15.887200 containerd[1695]: time="2026-01-14T23:53:15.887067928Z" level=info msg="container event discarded" container=c0b73d2953592750926f9d7f94522462b916ce3611c9c6ef42c310cd134eb3c2 type=CONTAINER_STARTED_EVENT Jan 14 23:53:15.919180 containerd[1695]: time="2026-01-14T23:53:15.919069066Z" level=info msg="container event discarded" container=a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4 type=CONTAINER_CREATED_EVENT Jan 14 23:53:15.978413 containerd[1695]: time="2026-01-14T23:53:15.978353407Z" level=info msg="container event discarded" container=a955a118d0afb8b73abb7a0eb0b08239f84453a3eb674938212d78e4f4fd7ec4 type=CONTAINER_STARTED_EVENT Jan 14 
23:53:16.006597 containerd[1695]: time="2026-01-14T23:53:16.006529053Z" level=info msg="container event discarded" container=b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9 type=CONTAINER_CREATED_EVENT Jan 14 23:53:16.006597 containerd[1695]: time="2026-01-14T23:53:16.006573493Z" level=info msg="container event discarded" container=b92902e85b0f6a9901475db228d18495c03e37d2f25426dec29cddaa13d3bcf9 type=CONTAINER_STARTED_EVENT Jan 14 23:53:16.632331 kubelet[2898]: I0114 23:53:16.632246 2898 scope.go:117] "RemoveContainer" containerID="a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385" Jan 14 23:53:16.634947 containerd[1695]: time="2026-01-14T23:53:16.634825932Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:3,}" Jan 14 23:53:16.646693 containerd[1695]: time="2026-01-14T23:53:16.646613728Z" level=info msg="Container aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:53:16.655459 containerd[1695]: time="2026-01-14T23:53:16.655413595Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for &ContainerMetadata{Name:tigera-operator,Attempt:3,} returns container id \"aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d\"" Jan 14 23:53:16.655912 containerd[1695]: time="2026-01-14T23:53:16.655888756Z" level=info msg="StartContainer for \"aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d\"" Jan 14 23:53:16.656855 containerd[1695]: time="2026-01-14T23:53:16.656830039Z" level=info msg="connecting to shim aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" protocol=ttrpc version=3 Jan 14 23:53:16.680471 systemd[1]: Started 
cri-containerd-aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d.scope - libcontainer container aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d. Jan 14 23:53:16.689000 audit: BPF prog-id=278 op=LOAD Jan 14 23:53:16.690000 audit: BPF prog-id=279 op=LOAD Jan 14 23:53:16.692863 kernel: audit: type=1334 audit(1768434796.689:793): prog-id=278 op=LOAD Jan 14 23:53:16.692932 kernel: audit: type=1334 audit(1768434796.690:794): prog-id=279 op=LOAD Jan 14 23:53:16.692954 kernel: audit: type=1300 audit(1768434796.690:794): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.690000 audit[5705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.700113 kernel: audit: type=1327 audit(1768434796.690:794): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.700506 kernel: audit: type=1334 audit(1768434796.690:795): prog-id=279 op=UNLOAD Jan 14 23:53:16.690000 audit: BPF prog-id=279 op=UNLOAD Jan 14 23:53:16.690000 audit[5705]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.705005 kernel: audit: type=1300 audit(1768434796.690:795): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.705069 kernel: audit: type=1327 audit(1768434796.690:795): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.708105 kernel: audit: type=1334 audit(1768434796.690:796): prog-id=280 op=LOAD Jan 14 23:53:16.690000 audit: BPF prog-id=280 op=LOAD Jan 14 23:53:16.690000 audit[5705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.712261 kernel: audit: type=1300 audit(1768434796.690:796): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.715988 kernel: audit: type=1327 audit(1768434796.690:796): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.691000 audit: BPF prog-id=281 op=LOAD Jan 14 23:53:16.691000 audit[5705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.695000 audit: BPF prog-id=281 op=UNLOAD Jan 14 23:53:16.695000 audit[5705]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.695000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.695000 audit: BPF prog-id=280 op=UNLOAD Jan 14 23:53:16.695000 audit[5705]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.695000 audit: BPF prog-id=282 op=LOAD Jan 14 23:53:16.695000 audit[5705]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3039 pid=5705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:16.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161666465386430323930303831376130623031353733303439613836 Jan 14 23:53:16.732102 containerd[1695]: time="2026-01-14T23:53:16.732064309Z" level=info msg="StartContainer for \"aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d\" returns successfully" Jan 14 23:53:16.853078 containerd[1695]: time="2026-01-14T23:53:16.852989558Z" level=info msg="container event discarded" 
container=7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40 type=CONTAINER_CREATED_EVENT Jan 14 23:53:16.853078 containerd[1695]: time="2026-01-14T23:53:16.853050119Z" level=info msg="container event discarded" container=7f727aab5fd446c2da2aa05f84e3466e67afb1de01869552e582b6d7af5d7b40 type=CONTAINER_STARTED_EVENT Jan 14 23:53:17.891980 containerd[1695]: time="2026-01-14T23:53:17.891904892Z" level=info msg="container event discarded" container=240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3 type=CONTAINER_CREATED_EVENT Jan 14 23:53:17.891980 containerd[1695]: time="2026-01-14T23:53:17.891970412Z" level=info msg="container event discarded" container=240b0d246287407392919a5ac72eeda0e1f2003f305e86ca090cfe46603526d3 type=CONTAINER_STARTED_EVENT Jan 14 23:53:17.910298 containerd[1695]: time="2026-01-14T23:53:17.910182788Z" level=info msg="container event discarded" container=bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e type=CONTAINER_CREATED_EVENT Jan 14 23:53:17.956501 containerd[1695]: time="2026-01-14T23:53:17.956436689Z" level=info msg="container event discarded" container=bb5544d0ae98f7ee4817de3db16d8dd906c8e67b204082873117caa1a94e5c6e type=CONTAINER_STARTED_EVENT Jan 14 23:53:18.847430 containerd[1695]: time="2026-01-14T23:53:18.847375131Z" level=info msg="container event discarded" container=95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc type=CONTAINER_CREATED_EVENT Jan 14 23:53:18.847627 containerd[1695]: time="2026-01-14T23:53:18.847416211Z" level=info msg="container event discarded" container=95ea2483ffc7cb4cea482fadf5fc213aab2e5655407000427719fd3536ee7adc type=CONTAINER_STARTED_EVENT Jan 14 23:53:19.632681 kubelet[2898]: E0114 23:53:19.632601 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound 
desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:53:21.632667 kubelet[2898]: E0114 23:53:21.632551 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:53:23.107428 kubelet[2898]: E0114 23:53:23.107312 2898 event.go:359] "Server rejected event (will not retry!)" err="Timeout: request did not complete within requested timeout - context deadline exceeded" event="&Event{ObjectMeta:{kube-apiserver-ci-4515-1-0-n-1d3be4f164.188abe03c8cae192 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4515-1-0-n-1d3be4f164,UID:0b87770b8d26d1b1663c3229f1382cec,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:52:07.159259538 +0000 UTC m=+408.608798092,LastTimestamp:2026-01-14 23:52:07.159259538 +0000 UTC m=+408.608798092,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:53:23.903585 kubelet[2898]: I0114 23:53:23.903442 2898 prober_manager.go:312] "Failed to trigger a manual 
run" probe="Readiness" Jan 14 23:53:23.904282 containerd[1695]: time="2026-01-14T23:53:23.904239499Z" level=info msg="StopContainer for \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" with timeout 30 (s)" Jan 14 23:53:23.904778 containerd[1695]: time="2026-01-14T23:53:23.904691540Z" level=info msg="Stop container \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" with signal terminated" Jan 14 23:53:24.495392 kubelet[2898]: E0114 23:53:24.495262 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" interval="800ms" Jan 14 23:53:24.633451 kubelet[2898]: E0114 23:53:24.633406 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:53:24.960328 kubelet[2898]: I0114 23:53:24.960238 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="the server was unable to return a response in the time allotted, but may still 
be processing the request (get pods kube-apiserver-ci-4515-1-0-n-1d3be4f164)" Jan 14 23:53:25.632258 kubelet[2898]: E0114 23:53:25.632219 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:53:25.632869 kubelet[2898]: E0114 23:53:25.632602 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:53:27.922920 systemd[1]: cri-containerd-aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d.scope: Deactivated successfully. 
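The audit PROCTITLE records above carry the runc command line as a hex-encoded, NUL-separated argv. A minimal sketch of decoding such a record (the helper name is illustrative; the hex prefix below is taken from the records above):

```python
# Decode an audit PROCTITLE value: hex-encoded bytes, argv entries separated by NUL.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)
    # Split on NUL separators and drop any empty trailing element.
    return [part.decode("utf-8", errors="replace") for part in raw.split(b"\x00") if part]

# Prefix of the proctitle logged above for the runc invocation:
args = decode_proctitle(
    "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
)
# args == ["runc", "--root", "/run/containerd/runc/k8s.io"]
```

Decoding the full proctitle string recovers the complete `runc --root … --log …` invocation that containerd's shim issued for the task.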
Jan 14 23:53:27.923965 containerd[1695]: time="2026-01-14T23:53:27.923838938Z" level=info msg="received container exit event container_id:\"aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d\" id:\"aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d\" pid:5717 exit_status:1 exited_at:{seconds:1768434807 nanos:923104415}" Jan 14 23:53:27.928000 audit: BPF prog-id=278 op=UNLOAD Jan 14 23:53:27.930736 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:53:27.930803 kernel: audit: type=1334 audit(1768434807.928:801): prog-id=278 op=UNLOAD Jan 14 23:53:27.928000 audit: BPF prog-id=282 op=UNLOAD Jan 14 23:53:27.932311 kernel: audit: type=1334 audit(1768434807.928:802): prog-id=282 op=UNLOAD Jan 14 23:53:27.944450 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d-rootfs.mount: Deactivated successfully. Jan 14 23:53:28.632679 kubelet[2898]: E0114 23:53:28.632508 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:53:28.711753 kubelet[2898]: I0114 23:53:28.711416 2898 scope.go:117] "RemoveContainer" containerID="a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385" Jan 14 23:53:28.711955 kubelet[2898]: I0114 23:53:28.711776 2898 scope.go:117] "RemoveContainer" containerID="aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" Jan 14 23:53:28.711955 kubelet[2898]: E0114 23:53:28.711914 2898 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:53:28.713463 containerd[1695]: time="2026-01-14T23:53:28.713291709Z" level=info msg="RemoveContainer for \"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\"" Jan 14 23:53:28.717832 containerd[1695]: time="2026-01-14T23:53:28.717787723Z" level=info msg="RemoveContainer for \"a0513d30675a7b6ddadd27d6aa3c901277aec0d6d39b25660325dbe1a1bc2385\" returns successfully" Jan 14 23:53:31.632602 kubelet[2898]: E0114 23:53:31.632558 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:53:33.632493 kubelet[2898]: E0114 23:53:33.632385 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:53:35.297500 kubelet[2898]: E0114 23:53:35.297431 2898 controller.go:145] "Failed to ensure lease 
exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": net/http: request canceled (Client.Timeout exceeded while awaiting headers)" interval="1.6s" Jan 14 23:53:36.633095 kubelet[2898]: E0114 23:53:36.633016 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:53:37.632842 kubelet[2898]: E0114 23:53:37.632794 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:53:39.624057 kubelet[2898]: E0114 23:53:39.623915 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:29Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:29Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:29Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:29Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": context deadline exceeded" Jan 14 23:53:39.631608 kubelet[2898]: I0114 23:53:39.631580 2898 scope.go:117] "RemoveContainer" containerID="aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" Jan 14 23:53:39.631925 kubelet[2898]: E0114 23:53:39.631896 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:53:40.635466 containerd[1695]: time="2026-01-14T23:53:40.635426089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 14 23:53:40.968003 containerd[1695]: time="2026-01-14T23:53:40.967666944Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:40.969997 containerd[1695]: time="2026-01-14T23:53:40.969965311Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 14 23:53:40.970061 containerd[1695]: time="2026-01-14T23:53:40.969995711Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:40.970187 kubelet[2898]: E0114 23:53:40.970152 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:53:40.970493 kubelet[2898]: E0114 23:53:40.970197 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 14 23:53:40.970493 kubelet[2898]: E0114 23:53:40.970312 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:c6dcdc7d9611441ca8bf87758bc85c38,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 14 23:53:40.972109 containerd[1695]: time="2026-01-14T23:53:40.972090158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 14 23:53:41.298801 containerd[1695]: 
time="2026-01-14T23:53:41.298749835Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:41.300386 containerd[1695]: time="2026-01-14T23:53:41.300331880Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 14 23:53:41.300444 containerd[1695]: time="2026-01-14T23:53:41.300382840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:41.300656 kubelet[2898]: E0114 23:53:41.300584 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:53:41.300716 kubelet[2898]: E0114 23:53:41.300670 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 14 23:53:41.300873 kubelet[2898]: E0114 23:53:41.300788 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-x7cql,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-7f94899ccb-pnwbr_calico-system(63f0b6ec-9977-4e0c-b6a6-80408e82ee47): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 14 23:53:41.302028 kubelet[2898]: E0114 23:53:41.301947 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:53:42.633067 kubelet[2898]: E0114 23:53:42.633007 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:53:45.632428 kubelet[2898]: E0114 23:53:45.632362 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:53:46.899561 kubelet[2898]: E0114 23:53:46.899510 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" interval="3.2s" Jan 14 23:53:47.632923 containerd[1695]: time="2026-01-14T23:53:47.632845465Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 14 23:53:47.953939 containerd[1695]: time="2026-01-14T23:53:47.953705845Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:47.954927 containerd[1695]: time="2026-01-14T23:53:47.954815888Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 14 23:53:47.954927 containerd[1695]: time="2026-01-14T23:53:47.954884769Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:47.955104 kubelet[2898]: E0114 23:53:47.955034 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:53:47.955546 kubelet[2898]: E0114 23:53:47.955113 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 14 23:53:47.955546 kubelet[2898]: E0114 23:53:47.955484 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-h289k,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5sxpk_calico-system(fcec49c5-6358-46d9-9922-8a81fb4bafd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 14 23:53:47.955685 containerd[1695]: time="2026-01-14T23:53:47.955427050Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 14 23:53:47.956728 kubelet[2898]: E0114 23:53:47.956698 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:53:48.288488 containerd[1695]: time="2026-01-14T23:53:48.288375747Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:48.289613 containerd[1695]: time="2026-01-14T23:53:48.289563511Z" 
level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 14 23:53:48.289678 containerd[1695]: time="2026-01-14T23:53:48.289645951Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:48.289882 kubelet[2898]: E0114 23:53:48.289823 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:53:48.289882 kubelet[2898]: E0114 23:53:48.289876 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 14 23:53:48.290050 kubelet[2898]: E0114 23:53:48.289993 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 14 23:53:48.291902 containerd[1695]: time="2026-01-14T23:53:48.291867158Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 14 23:53:48.617575 containerd[1695]: time="2026-01-14T23:53:48.617437593Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:48.618616 containerd[1695]: time="2026-01-14T23:53:48.618575396Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 14 23:53:48.618702 containerd[1695]: time="2026-01-14T23:53:48.618610916Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:48.618857 kubelet[2898]: E0114 23:53:48.618803 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:53:48.618919 kubelet[2898]: E0114 23:53:48.618856 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 14 23:53:48.619012 kubelet[2898]: E0114 23:53:48.618964 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-wx4j2,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-2lqxs_calico-system(5c454d6a-8fe3-46dd-a39b-d216b7be481d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 14 23:53:48.620987 kubelet[2898]: E0114 23:53:48.620924 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:53:49.625230 kubelet[2898]: E0114 23:53:49.625022 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" Jan 14 23:53:49.625799 kubelet[2898]: E0114 23:53:49.625756 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:53:49.626015 kubelet[2898]: E0114 23:53:49.625987 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:53:49.626228 kubelet[2898]: E0114 23:53:49.626195 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": 
Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:53:49.626228 kubelet[2898]: E0114 23:53:49.626222 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:53:50.101100 kubelet[2898]: E0114 23:53:50.100976 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="6.4s" Jan 14 23:53:50.632774 containerd[1695]: time="2026-01-14T23:53:50.632453468Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:53:50.960869 containerd[1695]: time="2026-01-14T23:53:50.960591230Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:53:50.962832 containerd[1695]: time="2026-01-14T23:53:50.962777957Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:53:50.962935 containerd[1695]: time="2026-01-14T23:53:50.962853277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:53:50.963081 kubelet[2898]: E0114 23:53:50.963016 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:53:50.963081 kubelet[2898]: E0114 23:53:50.963073 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound 
desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:53:50.963683 kubelet[2898]: E0114 23:53:50.963192 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tfj59,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-49kdx_calico-apiserver(5eca9ff5-ed57-4795-b82c-c2e2b81c8474): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:53:50.964407 kubelet[2898]: E0114 23:53:50.964363 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:53:52.631907 kubelet[2898]: I0114 23:53:52.631865 2898 scope.go:117] "RemoveContainer" containerID="aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" Jan 14 23:53:52.632368 kubelet[2898]: E0114 23:53:52.632031 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to 
\"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:53:53.912568 containerd[1695]: time="2026-01-14T23:53:53.912455768Z" level=info msg="Kill container \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\"" Jan 14 23:53:53.925029 kubelet[2898]: E0114 23:53:53.924917 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": http2: server sent GOAWAY and closed the connection; LastStreamID=1327, ErrCode=NO_ERROR, debug=\"\"" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:53:53.925577 systemd[1]: cri-containerd-db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a.scope: Deactivated successfully. Jan 14 23:53:53.926161 systemd[1]: cri-containerd-db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a.scope: Consumed 3min 1.161s CPU time, 453.2M memory peak. 
Jan 14 23:53:53.926000 audit: BPF prog-id=283 op=LOAD Jan 14 23:53:53.926000 audit: BPF prog-id=83 op=UNLOAD Jan 14 23:53:53.930131 kernel: audit: type=1334 audit(1768434833.926:803): prog-id=283 op=LOAD Jan 14 23:53:53.930197 kernel: audit: type=1334 audit(1768434833.926:804): prog-id=83 op=UNLOAD Jan 14 23:53:53.930819 containerd[1695]: time="2026-01-14T23:53:53.930780304Z" level=info msg="received container exit event container_id:\"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" id:\"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" pid:2728 exit_status:137 exited_at:{seconds:1768434833 nanos:930432743}" Jan 14 23:53:53.953449 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a-rootfs.mount: Deactivated successfully. Jan 14 23:53:53.967805 containerd[1695]: time="2026-01-14T23:53:53.967761537Z" level=info msg="StopContainer for \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" returns successfully" Jan 14 23:53:53.969837 containerd[1695]: time="2026-01-14T23:53:53.969799743Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:1,}" Jan 14 23:53:53.979482 containerd[1695]: time="2026-01-14T23:53:53.979448212Z" level=info msg="Container 72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:53:53.989139 containerd[1695]: time="2026-01-14T23:53:53.989006402Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:1,} returns container id \"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\"" Jan 14 23:53:53.989525 containerd[1695]: time="2026-01-14T23:53:53.989499763Z" level=info msg="StartContainer for 
\"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\"" Jan 14 23:53:53.990814 containerd[1695]: time="2026-01-14T23:53:53.990780887Z" level=info msg="connecting to shim 72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:53:54.019528 systemd[1]: Started cri-containerd-72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc.scope - libcontainer container 72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc. Jan 14 23:53:54.030000 audit: BPF prog-id=284 op=LOAD Jan 14 23:53:54.037120 kernel: audit: type=1334 audit(1768434834.030:805): prog-id=284 op=LOAD Jan 14 23:53:54.037231 kernel: audit: type=1334 audit(1768434834.031:806): prog-id=285 op=LOAD Jan 14 23:53:54.037253 kernel: audit: type=1300 audit(1768434834.031:806): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.031000 audit: BPF prog-id=285 op=LOAD Jan 14 23:53:54.031000 audit[5815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.041095 kernel: audit: type=1327 audit(1768434834.031:806): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.031000 audit: BPF prog-id=285 op=UNLOAD Jan 14 23:53:54.031000 audit[5815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.042316 kernel: audit: type=1334 audit(1768434834.031:807): prog-id=285 op=UNLOAD Jan 14 23:53:54.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.049281 kernel: audit: type=1300 audit(1768434834.031:807): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.049367 kernel: audit: type=1327 audit(1768434834.031:807): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.031000 audit: BPF prog-id=286 op=LOAD Jan 14 23:53:54.050430 kernel: audit: type=1334 audit(1768434834.031:808): prog-id=286 op=LOAD Jan 14 23:53:54.031000 audit[5815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 
items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.031000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.032000 audit: BPF prog-id=287 op=LOAD Jan 14 23:53:54.032000 audit[5815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.032000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.035000 audit: BPF prog-id=287 op=UNLOAD Jan 14 23:53:54.035000 audit[5815]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.035000 audit: BPF prog-id=286 op=UNLOAD Jan 14 23:53:54.035000 audit[5815]: SYSCALL arch=c00000b7 syscall=57 
success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.035000 audit: BPF prog-id=288 op=LOAD Jan 14 23:53:54.035000 audit[5815]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2573 pid=5815 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:53:54.035000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3732643838623339633664626632383833643534386365633133316334 Jan 14 23:53:54.069068 containerd[1695]: time="2026-01-14T23:53:54.069031126Z" level=info msg="StartContainer for \"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\" returns successfully" Jan 14 23:53:55.632298 kubelet[2898]: E0114 23:53:55.632214 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:53:55.632833 kubelet[2898]: E0114 23:53:55.632657 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:53:59.632436 kubelet[2898]: E0114 23:53:59.632391 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" 
podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:54:00.634053 containerd[1695]: time="2026-01-14T23:54:00.633955021Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 14 23:54:00.963991 containerd[1695]: time="2026-01-14T23:54:00.963631228Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:54:00.965863 containerd[1695]: time="2026-01-14T23:54:00.965824834Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 14 23:54:00.966028 containerd[1695]: time="2026-01-14T23:54:00.965893955Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 14 23:54:00.966597 kubelet[2898]: E0114 23:54:00.966369 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:54:00.966597 kubelet[2898]: E0114 23:54:00.966421 2898 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 14 23:54:00.966597 kubelet[2898]: E0114 23:54:00.966540 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m5dng,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-5b767987c5-2glxx_calico-apiserver(300b5f0b-ed7c-4a04-a4b8-68a71ea25297): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 14 23:54:00.967714 kubelet[2898]: E0114 23:54:00.967682 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:54:02.632743 kubelet[2898]: E0114 23:54:02.632677 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:54:02.633114 kubelet[2898]: E0114 23:54:02.632752 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:54:04.217604 kubelet[2898]: E0114 23:54:04.217480 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:54:04.862108 kernel: kauditd_printk_skb: 14 callbacks suppressed Jan 14 23:54:04.862394 kernel: audit: type=1334 audit(1768434844.859:813): prog-id=98 op=UNLOAD Jan 14 23:54:04.862473 kernel: audit: type=1334 audit(1768434844.859:814): prog-id=102 op=UNLOAD Jan 14 23:54:04.859000 audit: BPF prog-id=98 op=UNLOAD Jan 14 23:54:04.859000 audit: BPF prog-id=102 op=UNLOAD Jan 14 23:54:04.928107 kubelet[2898]: I0114 23:54:04.928059 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": net/http: TLS handshake timeout - error from a previous attempt: http2: server sent GOAWAY and closed the connection; LastStreamID=1327, ErrCode=NO_ERROR, debug=\"\"" Jan 14 23:54:05.632070 kubelet[2898]: I0114 23:54:05.632029 2898 scope.go:117] "RemoveContainer" containerID="aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" Jan 14 
23:54:05.632438 kubelet[2898]: E0114 23:54:05.632188 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"tigera-operator\" with CrashLoopBackOff: \"back-off 40s restarting failed container=tigera-operator pod=tigera-operator-7dcd859c48-hg526_tigera-operator(549af1a4-d10d-41a8-bd81-9ce05836d164)\"" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" Jan 14 23:54:06.502440 kubelet[2898]: E0114 23:54:06.502378 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" interval="7s" Jan 14 23:54:08.633045 containerd[1695]: time="2026-01-14T23:54:08.633007056Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 14 23:54:09.308723 containerd[1695]: time="2026-01-14T23:54:09.308382959Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 14 23:54:09.486130 containerd[1695]: time="2026-01-14T23:54:09.486072142Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 14 23:54:09.486879 kubelet[2898]: E0114 23:54:09.486616 2898 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:54:09.486879 kubelet[2898]: E0114 23:54:09.486665 2898 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 14 23:54:09.486879 kubelet[2898]: E0114 23:54:09.486802 2898 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-gb4q4,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7cd9b5689c-544p6_calico-system(2d307ca4-cd62-4987-b2dc-ed6b76a2794e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 14 23:54:09.487305 containerd[1695]: time="2026-01-14T23:54:09.486142902Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 14 23:54:09.487986 kubelet[2898]: E0114 23:54:09.487953 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:54:09.632671 kubelet[2898]: E0114 23:54:09.632521 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:54:09.912772 kubelet[2898]: E0114 23:54:09.912492 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:59Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:59Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:59Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:53:59Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch 
\"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 14 23:54:11.633077 kubelet[2898]: E0114 23:54:11.633006 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:54:14.632391 kubelet[2898]: E0114 23:54:14.632340 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:54:14.632391 kubelet[2898]: E0114 23:54:14.632342 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:54:14.929677 kubelet[2898]: I0114 23:54:14.929494 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": net/http: TLS handshake timeout" Jan 14 23:54:15.151059 systemd[1]: cri-containerd-72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc.scope: Deactivated successfully. Jan 14 23:54:15.151546 kubelet[2898]: E0114 23:54:15.150994 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": read tcp 10.0.22.230:34290->10.0.22.230:6443: read: connection reset by peer" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:54:15.151409 systemd[1]: 
cri-containerd-72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc.scope: Consumed 1.178s CPU time, 24.1M memory peak. Jan 14 23:54:15.152814 containerd[1695]: time="2026-01-14T23:54:15.152782653Z" level=info msg="received container exit event container_id:\"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\" id:\"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\" pid:5829 exit_status:255 exited_at:{seconds:1768434855 nanos:152561212}" Jan 14 23:54:15.156000 audit: BPF prog-id=284 op=UNLOAD Jan 14 23:54:15.156000 audit: BPF prog-id=288 op=UNLOAD Jan 14 23:54:15.160157 kernel: audit: type=1334 audit(1768434855.156:815): prog-id=284 op=UNLOAD Jan 14 23:54:15.160215 kernel: audit: type=1334 audit(1768434855.156:816): prog-id=288 op=UNLOAD Jan 14 23:54:15.174961 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc-rootfs.mount: Deactivated successfully. Jan 14 23:54:15.632286 kubelet[2898]: E0114 23:54:15.632223 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:54:15.810091 kubelet[2898]: I0114 23:54:15.809579 2898 scope.go:117] "RemoveContainer" containerID="db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a" Jan 14 23:54:15.810091 kubelet[2898]: I0114 23:54:15.809861 2898 scope.go:117] "RemoveContainer" containerID="72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc" Jan 14 23:54:15.811785 containerd[1695]: time="2026-01-14T23:54:15.811745186Z" 
level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:2,}" Jan 14 23:54:15.812066 containerd[1695]: time="2026-01-14T23:54:15.812040307Z" level=info msg="RemoveContainer for \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\"" Jan 14 23:54:15.817830 containerd[1695]: time="2026-01-14T23:54:15.817776484Z" level=info msg="RemoveContainer for \"db5c219f9a2c0d3e9646e089d1e926b2f9aba6880c56630dad6061209602f04a\" returns successfully" Jan 14 23:54:15.821921 containerd[1695]: time="2026-01-14T23:54:15.821877017Z" level=info msg="Container 1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:54:15.829511 containerd[1695]: time="2026-01-14T23:54:15.829468560Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:2,} returns container id \"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\"" Jan 14 23:54:15.830115 containerd[1695]: time="2026-01-14T23:54:15.830093562Z" level=info msg="StartContainer for \"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\"" Jan 14 23:54:15.831523 containerd[1695]: time="2026-01-14T23:54:15.831492766Z" level=info msg="connecting to shim 1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:54:15.850440 systemd[1]: Started cri-containerd-1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc.scope - libcontainer container 1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc. 
Jan 14 23:54:15.861000 audit: BPF prog-id=289 op=LOAD Jan 14 23:54:15.863276 kernel: audit: type=1334 audit(1768434855.861:817): prog-id=289 op=LOAD Jan 14 23:54:15.863329 kernel: audit: type=1334 audit(1768434855.861:818): prog-id=290 op=LOAD Jan 14 23:54:15.861000 audit: BPF prog-id=290 op=LOAD Jan 14 23:54:15.861000 audit[5898]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.867575 kernel: audit: type=1300 audit(1768434855.861:818): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.867700 kernel: audit: type=1327 audit(1768434855.861:818): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.861000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.862000 audit: BPF prog-id=290 op=UNLOAD Jan 14 23:54:15.871914 kernel: audit: type=1334 audit(1768434855.862:819): prog-id=290 op=UNLOAD Jan 14 23:54:15.871974 kernel: audit: type=1300 audit(1768434855.862:819): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 
egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.862000 audit[5898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.875346 kernel: audit: type=1327 audit(1768434855.862:819): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.862000 audit: BPF prog-id=291 op=LOAD Jan 14 23:54:15.878824 kernel: audit: type=1334 audit(1768434855.862:820): prog-id=291 op=LOAD Jan 14 23:54:15.862000 audit[5898]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.862000 audit: BPF prog-id=292 op=LOAD Jan 14 23:54:15.862000 audit[5898]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.862000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.866000 audit: BPF prog-id=292 op=UNLOAD Jan 14 23:54:15.866000 audit[5898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.866000 audit: BPF prog-id=291 op=UNLOAD Jan 14 23:54:15.866000 audit[5898]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.866000 audit: BPF prog-id=293 op=LOAD Jan 14 23:54:15.866000 audit[5898]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2573 pid=5898 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:15.866000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3164353634343063663536636662343830666138316165646239643130 Jan 14 23:54:15.899917 containerd[1695]: time="2026-01-14T23:54:15.899811055Z" level=info msg="StartContainer for \"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\" returns successfully" Jan 14 23:54:18.632374 kubelet[2898]: I0114 23:54:18.631927 2898 scope.go:117] "RemoveContainer" containerID="aafde8d02900817a0b01573049a86a4df715f2968da15578421d2621c947d83d" Jan 14 23:54:18.635096 containerd[1695]: time="2026-01-14T23:54:18.635063090Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for container &ContainerMetadata{Name:tigera-operator,Attempt:4,}" Jan 14 23:54:18.644285 containerd[1695]: time="2026-01-14T23:54:18.644071358Z" level=info msg="Container 159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:54:18.651731 containerd[1695]: time="2026-01-14T23:54:18.651679021Z" level=info msg="CreateContainer within sandbox \"eb18736f7ce953dd0b89bc3f4370648bbfc6543fef46921f2d98a108f4465ef7\" for &ContainerMetadata{Name:tigera-operator,Attempt:4,} returns container id \"159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a\"" Jan 14 23:54:18.652348 containerd[1695]: time="2026-01-14T23:54:18.652134383Z" level=info msg="StartContainer for \"159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a\"" Jan 14 
23:54:18.653113 containerd[1695]: time="2026-01-14T23:54:18.653081345Z" level=info msg="connecting to shim 159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a" address="unix:///run/containerd/s/a973a0639db4dd5604a63ce03369335768b2c82b62b1d698552f1c66bd9bf38c" protocol=ttrpc version=3 Jan 14 23:54:18.680543 systemd[1]: Started cri-containerd-159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a.scope - libcontainer container 159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a. Jan 14 23:54:18.690000 audit: BPF prog-id=294 op=LOAD Jan 14 23:54:18.690000 audit: BPF prog-id=295 op=LOAD Jan 14 23:54:18.690000 audit[5935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.690000 audit: BPF prog-id=295 op=UNLOAD Jan 14 23:54:18.690000 audit[5935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.690000 audit: BPF prog-id=296 op=LOAD Jan 14 
23:54:18.690000 audit[5935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.690000 audit: BPF prog-id=297 op=LOAD Jan 14 23:54:18.690000 audit[5935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.691000 audit: BPF prog-id=297 op=UNLOAD Jan 14 23:54:18.691000 audit[5935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 
23:54:18.691000 audit: BPF prog-id=296 op=UNLOAD Jan 14 23:54:18.691000 audit[5935]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.691000 audit: BPF prog-id=298 op=LOAD Jan 14 23:54:18.691000 audit[5935]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=3039 pid=5935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:18.691000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3135393431346136326433613261663035366432326338633563326438 Jan 14 23:54:18.708720 containerd[1695]: time="2026-01-14T23:54:18.708668995Z" level=info msg="StartContainer for \"159414a62d3a2af056d22c8c5c2d8f15b07d4a7f2af94438aac7250018f0fa6a\" returns successfully" Jan 14 23:54:19.913223 kubelet[2898]: E0114 23:54:19.913131 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 10.0.22.230:34254->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:54:22.632739 
kubelet[2898]: E0114 23:54:22.632680 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:54:23.503928 kubelet[2898]: E0114 23:54:23.503833 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded - error from a previous attempt: read tcp 10.0.22.230:34278->10.0.22.230:6443: read: connection reset by peer" interval="7s" Jan 14 23:54:23.632698 kubelet[2898]: E0114 23:54:23.632614 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:54:26.153028 kubelet[2898]: I0114 
23:54:26.152906 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": net/http: TLS handshake timeout - error from a previous attempt: read tcp 10.0.22.230:34296->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:54:26.633044 kubelet[2898]: E0114 23:54:26.632880 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:54:26.633044 kubelet[2898]: E0114 23:54:26.632885 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:54:26.633248 kubelet[2898]: E0114 23:54:26.633097 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:54:27.632413 kubelet[2898]: E0114 23:54:27.632358 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:54:29.914329 kubelet[2898]: E0114 23:54:29.913553 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" Jan 14 23:54:35.153902 kubelet[2898]: E0114 23:54:35.153682 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:54:35.632810 kubelet[2898]: E0114 23:54:35.632758 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:54:35.633095 kubelet[2898]: E0114 23:54:35.632795 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:54:36.154488 kubelet[2898]: I0114 23:54:36.154392 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": net/http: TLS handshake timeout" Jan 14 23:54:36.485428 systemd[1]: cri-containerd-1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc.scope: Deactivated successfully. Jan 14 23:54:36.486646 containerd[1695]: time="2026-01-14T23:54:36.486513823Z" level=info msg="received container exit event container_id:\"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\" id:\"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\" pid:5911 exit_status:255 exited_at:{seconds:1768434876 nanos:486180462}" Jan 14 23:54:36.508802 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc-rootfs.mount: Deactivated successfully. 
Jan 14 23:54:36.864750 kubelet[2898]: I0114 23:54:36.864610 2898 scope.go:117] "RemoveContainer" containerID="72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc" Jan 14 23:54:36.864961 kubelet[2898]: I0114 23:54:36.864906 2898 scope.go:117] "RemoveContainer" containerID="1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" Jan 14 23:54:36.865065 kubelet[2898]: E0114 23:54:36.865042 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:54:36.866620 containerd[1695]: time="2026-01-14T23:54:36.866450864Z" level=info msg="RemoveContainer for \"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\"" Jan 14 23:54:36.872164 containerd[1695]: time="2026-01-14T23:54:36.872129961Z" level=info msg="RemoveContainer for \"72d88b39c6dbf2883d548cec131c43f0518f0914e3089c4af006e025e5d1dbcc\" returns successfully" Jan 14 23:54:37.488208 kubelet[2898]: E0114 23:54:37.487190 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:55368->10.0.22.230:6443: read: connection reset by peer" interval="7s" Jan 14 23:54:37.488208 kubelet[2898]: E0114 23:54:37.487446 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: 
read tcp 10.0.22.230:55344->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:54:37.488208 kubelet[2898]: E0114 23:54:37.487721 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.488208 kubelet[2898]: E0114 23:54:37.487742 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:54:37.489840 kubelet[2898]: I0114 23:54:37.489045 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:55382->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:54:37.489840 kubelet[2898]: I0114 23:54:37.489351 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.490301 kubelet[2898]: I0114 23:54:37.490211 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.492869 kubelet[2898]: I0114 23:54:37.492670 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.493005 kubelet[2898]: I0114 23:54:37.492957 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.493370 kubelet[2898]: I0114 23:54:37.493329 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.493623 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.493864 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.494045 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" 
err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.494498 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.494682 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.494869 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495143 kubelet[2898]: I0114 23:54:37.495048 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495417 kubelet[2898]: I0114 23:54:37.495287 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495548 kubelet[2898]: I0114 23:54:37.495520 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.495862 kubelet[2898]: I0114 23:54:37.495835 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.496095 kubelet[2898]: I0114 23:54:37.496072 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.496326 kubelet[2898]: I0114 23:54:37.496302 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.496524 kubelet[2898]: I0114 23:54:37.496504 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.497239 kubelet[2898]: I0114 23:54:37.496804 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.497239 kubelet[2898]: I0114 23:54:37.497069 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.497368 kubelet[2898]: I0114 23:54:37.497320 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.497629 kubelet[2898]: I0114 23:54:37.497602 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.497962 kubelet[2898]: I0114 23:54:37.497933 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.498309 kubelet[2898]: I0114 23:54:37.498261 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.498651 kubelet[2898]: I0114 23:54:37.498621 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.498975 kubelet[2898]: I0114 23:54:37.498945 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.499361 kubelet[2898]: I0114 23:54:37.499329 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:37.499639 kubelet[2898]: I0114 23:54:37.499604 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.632208 kubelet[2898]: I0114 23:54:38.632160 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.633038 kubelet[2898]: I0114 23:54:38.633005 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.633115 kubelet[2898]: E0114 23:54:38.633056 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:54:38.633515 kubelet[2898]: I0114 23:54:38.633450 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.633816 kubelet[2898]: I0114 23:54:38.633790 2898 status_manager.go:890] "Failed to get 
status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.634088 kubelet[2898]: I0114 23:54:38.634041 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.634354 kubelet[2898]: I0114 23:54:38.634330 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.634574 kubelet[2898]: I0114 23:54:38.634540 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.634784 kubelet[2898]: I0114 23:54:38.634760 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.634979 kubelet[2898]: I0114 23:54:38.634956 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.635148 kubelet[2898]: I0114 23:54:38.635129 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:38.715000 audit: BPF prog-id=289 op=UNLOAD Jan 14 23:54:38.717516 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 14 23:54:38.717576 kernel: audit: type=1334 audit(1768434878.715:833): prog-id=289 op=UNLOAD Jan 14 23:54:38.716000 audit: BPF prog-id=293 op=UNLOAD Jan 14 23:54:38.719016 kernel: audit: type=1334 audit(1768434878.716:834): prog-id=293 op=UNLOAD Jan 14 23:54:39.632508 kubelet[2898]: E0114 23:54:39.632455 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:54:39.887047 kubelet[2898]: I0114 23:54:39.886840 2898 scope.go:117] "RemoveContainer" containerID="1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" Jan 14 23:54:39.887047 kubelet[2898]: E0114 23:54:39.886998 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed 
container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:54:39.887428 kubelet[2898]: I0114 23:54:39.887401 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.887709 kubelet[2898]: I0114 23:54:39.887657 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.887911 kubelet[2898]: I0114 23:54:39.887886 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.888083 kubelet[2898]: I0114 23:54:39.888063 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.888246 kubelet[2898]: I0114 23:54:39.888223 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" 
pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.888476 kubelet[2898]: I0114 23:54:39.888448 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.888934 kubelet[2898]: I0114 23:54:39.888740 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.889013 kubelet[2898]: I0114 23:54:39.888987 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.889195 kubelet[2898]: I0114 23:54:39.889174 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:39.889426 kubelet[2898]: I0114 23:54:39.889392 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:40.632728 kubelet[2898]: E0114 23:54:40.632571 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:54:41.895700 kubelet[2898]: I0114 23:54:41.895657 2898 scope.go:117] "RemoveContainer" containerID="1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" Jan 14 23:54:41.896070 kubelet[2898]: E0114 23:54:41.895814 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:54:42.632301 kubelet[2898]: E0114 23:54:42.632222 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:54:44.488157 kubelet[2898]: E0114 23:54:44.488113 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:54:45.155443 kubelet[2898]: E0114 23:54:45.155321 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:54:46.634752 kubelet[2898]: E0114 23:54:46.634676 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:54:47.843814 kubelet[2898]: E0114 23:54:47.843740 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:47Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:47Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:47Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:47Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:47.844221 kubelet[2898]: E0114 23:54:47.844007 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:47.844221 kubelet[2898]: E0114 23:54:47.844189 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: 
connection refused" Jan 14 23:54:47.844437 kubelet[2898]: E0114 23:54:47.844392 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:47.844592 kubelet[2898]: E0114 23:54:47.844577 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:47.844592 kubelet[2898]: E0114 23:54:47.844592 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:54:48.632284 kubelet[2898]: I0114 23:54:48.631741 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.632284 kubelet[2898]: I0114 23:54:48.632054 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.632453 kubelet[2898]: I0114 23:54:48.632341 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 
23:54:48.632639 kubelet[2898]: I0114 23:54:48.632590 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.632880 kubelet[2898]: I0114 23:54:48.632851 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.633083 kubelet[2898]: I0114 23:54:48.633050 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.633308 kubelet[2898]: I0114 23:54:48.633282 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.634104 kubelet[2898]: I0114 23:54:48.633715 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.634104 kubelet[2898]: I0114 23:54:48.633956 2898 
status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:48.634230 kubelet[2898]: I0114 23:54:48.634161 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:54:50.632426 kubelet[2898]: E0114 23:54:50.632374 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:54:51.489406 kubelet[2898]: E0114 23:54:51.489356 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 
23:54:53.632292 kubelet[2898]: E0114 23:54:53.632238 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:54:54.632059 kubelet[2898]: I0114 23:54:54.631797 2898 scope.go:117] "RemoveContainer" containerID="1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" Jan 14 23:54:54.633212 kubelet[2898]: E0114 23:54:54.633161 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:54:54.633812 kubelet[2898]: E0114 23:54:54.633509 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:54:54.634801 containerd[1695]: time="2026-01-14T23:54:54.634517262Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:3,}" Jan 14 23:54:54.648515 containerd[1695]: time="2026-01-14T23:54:54.647562701Z" level=info msg="Container 7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:54:54.658518 containerd[1695]: time="2026-01-14T23:54:54.658472295Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:3,} returns container id \"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\"" Jan 14 23:54:54.658982 containerd[1695]: time="2026-01-14T23:54:54.658950736Z" level=info msg="StartContainer for \"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\"" Jan 14 23:54:54.660118 containerd[1695]: time="2026-01-14T23:54:54.660069380Z" level=info msg="connecting to shim 7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:54:54.680481 systemd[1]: Started cri-containerd-7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75.scope - libcontainer container 7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75. 
Jan 14 23:54:54.691000 audit: BPF prog-id=299 op=LOAD Jan 14 23:54:54.692000 audit: BPF prog-id=300 op=LOAD Jan 14 23:54:54.693761 kernel: audit: type=1334 audit(1768434894.691:835): prog-id=299 op=LOAD Jan 14 23:54:54.693878 kernel: audit: type=1334 audit(1768434894.692:836): prog-id=300 op=LOAD Jan 14 23:54:54.693900 kernel: audit: type=1300 audit(1768434894.692:836): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.692000 audit[6028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.700835 kernel: audit: type=1327 audit(1768434894.692:836): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.700921 kernel: audit: type=1334 audit(1768434894.692:837): prog-id=300 op=UNLOAD Jan 14 23:54:54.692000 audit: BPF prog-id=300 op=UNLOAD Jan 14 23:54:54.692000 audit[6028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.705209 kernel: audit: type=1300 audit(1768434894.692:837): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.705336 kernel: audit: type=1327 audit(1768434894.692:837): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.692000 audit: BPF prog-id=301 op=LOAD Jan 14 23:54:54.713121 kernel: audit: type=1334 audit(1768434894.692:838): prog-id=301 op=LOAD Jan 14 23:54:54.713184 kernel: audit: type=1300 audit(1768434894.692:838): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.713205 kernel: audit: type=1327 audit(1768434894.692:838): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.692000 audit[6028]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.692000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.693000 audit: BPF prog-id=302 op=LOAD Jan 14 23:54:54.693000 audit[6028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.693000 audit: BPF prog-id=302 op=UNLOAD Jan 14 23:54:54.693000 audit[6028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.693000 audit: BPF prog-id=301 op=UNLOAD Jan 14 23:54:54.693000 
audit[6028]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.693000 audit: BPF prog-id=303 op=LOAD Jan 14 23:54:54.693000 audit[6028]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2573 pid=6028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:54:54.693000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3765366338323137303666303138343763323137663930393231323761 Jan 14 23:54:54.733547 containerd[1695]: time="2026-01-14T23:54:54.733499364Z" level=info msg="StartContainer for \"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\" returns successfully" Jan 14 23:54:56.632818 kubelet[2898]: E0114 23:54:56.632768 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:54:59.632592 kubelet[2898]: E0114 23:54:59.632541 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:55:04.633326 kubelet[2898]: E0114 23:55:04.633062 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:55:04.906395 kubelet[2898]: I0114 23:55:04.906167 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": net/http: TLS handshake timeout" Jan 14 23:55:05.157326 kubelet[2898]: E0114 23:55:05.157012 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 
1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:55:05.632724 kubelet[2898]: E0114 23:55:05.632677 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:55:05.633102 kubelet[2898]: E0114 23:55:05.633070 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:55:07.632716 kubelet[2898]: E0114 23:55:07.632654 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:55:08.004161 kubelet[2898]: E0114 23:55:08.004099 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:58Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:58Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:58Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:54:58Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node 
\"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": context deadline exceeded" Jan 14 23:55:08.491819 kubelet[2898]: E0114 23:55:08.491617 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" interval="7s" Jan 14 23:55:11.632312 kubelet[2898]: E0114 23:55:11.632242 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:55:14.632984 kubelet[2898]: E0114 23:55:14.632925 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:55:14.908261 kubelet[2898]: I0114 23:55:14.907663 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": net/http: TLS handshake 
timeout" Jan 14 23:55:16.079627 kubelet[2898]: E0114 23:55:16.079061 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": read tcp 10.0.22.230:57044->10.0.22.230:6443: read: connection reset by peer" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:55:16.079671 systemd[1]: cri-containerd-7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75.scope: Deactivated successfully. Jan 14 23:55:16.080310 systemd[1]: cri-containerd-7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75.scope: Consumed 1.442s CPU time, 22.5M memory peak. Jan 14 23:55:16.081350 containerd[1695]: time="2026-01-14T23:55:16.081016418Z" level=info msg="received container exit event container_id:\"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\" id:\"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\" pid:6042 exit_status:255 exited_at:{seconds:1768434916 nanos:80703057}" Jan 14 23:55:16.102083 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75-rootfs.mount: Deactivated successfully. 
Jan 14 23:55:16.950885 kubelet[2898]: I0114 23:55:16.950764 2898 scope.go:117] "RemoveContainer" containerID="1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc" Jan 14 23:55:16.951093 kubelet[2898]: I0114 23:55:16.951064 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:55:16.951236 kubelet[2898]: E0114 23:55:16.951209 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:55:16.952644 containerd[1695]: time="2026-01-14T23:55:16.952592802Z" level=info msg="RemoveContainer for \"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\"" Jan 14 23:55:16.957469 containerd[1695]: time="2026-01-14T23:55:16.957414936Z" level=info msg="RemoveContainer for \"1d56440cf56cfb480fa81aedb9d10219e3a3cbab9d8171877eb78e395a06d5cc\" returns successfully" Jan 14 23:55:17.081124 kubelet[2898]: E0114 23:55:17.080982 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:57050->10.0.22.230:6443: read: connection reset by peer" interval="7s" Jan 14 23:55:17.082201 kubelet[2898]: E0114 23:55:17.081258 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: 
read tcp 10.0.22.230:46496->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:55:17.082201 kubelet[2898]: I0114 23:55:17.081491 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:57034->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:55:17.082201 kubelet[2898]: E0114 23:55:17.081505 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.082201 kubelet[2898]: I0114 23:55:17.081749 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.082201 kubelet[2898]: E0114 23:55:17.081922 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.082201 kubelet[2898]: I0114 23:55:17.081941 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: 
connection refused" Jan 14 23:55:17.082201 kubelet[2898]: E0114 23:55:17.082095 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.082201 kubelet[2898]: I0114 23:55:17.082097 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.082201 kubelet[2898]: E0114 23:55:17.082109 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:55:17.084143 kubelet[2898]: I0114 23:55:17.082243 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.085494 kubelet[2898]: I0114 23:55:17.085442 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.085827 kubelet[2898]: I0114 23:55:17.085800 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.086147 kubelet[2898]: I0114 23:55:17.086103 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.086465 kubelet[2898]: I0114 23:55:17.086442 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.086683 kubelet[2898]: I0114 23:55:17.086663 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.086880 kubelet[2898]: I0114 23:55:17.086857 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.087075 kubelet[2898]: I0114 23:55:17.087053 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.087283 kubelet[2898]: I0114 23:55:17.087248 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.087488 kubelet[2898]: I0114 23:55:17.087468 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.087712 kubelet[2898]: I0114 23:55:17.087690 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.087921 kubelet[2898]: I0114 23:55:17.087901 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.088119 kubelet[2898]: I0114 23:55:17.088100 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.088323 kubelet[2898]: I0114 23:55:17.088300 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.088857 kubelet[2898]: I0114 23:55:17.088828 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.089096 kubelet[2898]: I0114 23:55:17.089075 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.089325 kubelet[2898]: I0114 23:55:17.089300 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.089525 kubelet[2898]: I0114 23:55:17.089502 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.089727 kubelet[2898]: I0114 23:55:17.089703 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.089920 kubelet[2898]: I0114 23:55:17.089898 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.090123 kubelet[2898]: I0114 23:55:17.090100 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.090381 kubelet[2898]: I0114 23:55:17.090342 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.090622 kubelet[2898]: I0114 23:55:17.090584 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.090846 kubelet[2898]: I0114 23:55:17.090812 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:17.565596 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:55:17.565718 kernel: audit: type=1334 audit(1768434917.564:843): prog-id=299 op=UNLOAD Jan 14 23:55:17.564000 audit: BPF prog-id=299 op=UNLOAD Jan 14 23:55:17.564000 audit: BPF prog-id=303 op=UNLOAD Jan 14 23:55:17.567303 kernel: audit: type=1334 audit(1768434917.564:844): prog-id=303 op=UNLOAD Jan 14 23:55:17.632526 kubelet[2898]: E0114 23:55:17.632458 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:55:18.632286 kubelet[2898]: I0114 23:55:18.632159 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.633334 kubelet[2898]: I0114 23:55:18.632647 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.633334 kubelet[2898]: I0114 23:55:18.632940 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.633334 kubelet[2898]: I0114 23:55:18.633146 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.633477 kubelet[2898]: I0114 23:55:18.633347 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.633619 kubelet[2898]: I0114 23:55:18.633593 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.634256 kubelet[2898]: I0114 23:55:18.634223 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" 
err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.634667 kubelet[2898]: I0114 23:55:18.634633 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.634963 kubelet[2898]: I0114 23:55:18.634935 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.636058 kubelet[2898]: I0114 23:55:18.636026 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:18.636905 kubelet[2898]: E0114 23:55:18.636520 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:55:19.633530 kubelet[2898]: E0114 
23:55:19.633480 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:55:19.887388 kubelet[2898]: I0114 23:55:19.887168 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:55:19.887388 kubelet[2898]: E0114 23:55:19.887375 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:55:19.888071 kubelet[2898]: I0114 23:55:19.887827 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.888151 kubelet[2898]: I0114 23:55:19.888116 2898 status_manager.go:890] "Failed to get status for 
pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.888413 kubelet[2898]: I0114 23:55:19.888385 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.888670 kubelet[2898]: I0114 23:55:19.888606 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.888876 kubelet[2898]: I0114 23:55:19.888854 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.889120 kubelet[2898]: I0114 23:55:19.889095 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.889352 kubelet[2898]: I0114 23:55:19.889329 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" 
pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.889546 kubelet[2898]: I0114 23:55:19.889522 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.889714 kubelet[2898]: I0114 23:55:19.889695 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:19.889953 kubelet[2898]: I0114 23:55:19.889918 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:21.895932 kubelet[2898]: I0114 23:55:21.895892 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:55:21.896309 kubelet[2898]: E0114 23:55:21.896055 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" 
podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:55:22.633775 kubelet[2898]: E0114 23:55:22.633698 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:55:23.632413 kubelet[2898]: E0114 23:55:23.632342 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:55:24.081719 kubelet[2898]: E0114 23:55:24.081669 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:55:26.080962 kubelet[2898]: E0114 23:55:26.080800 2898 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:55:27.317017 kubelet[2898]: E0114 23:55:27.316979 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:27Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:27Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:27Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:27Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 
23:55:27.317738 kubelet[2898]: E0114 23:55:27.317401 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:27.317738 kubelet[2898]: E0114 23:55:27.317640 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:27.318113 kubelet[2898]: E0114 23:55:27.317879 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:27.318113 kubelet[2898]: E0114 23:55:27.318077 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:27.318113 kubelet[2898]: E0114 23:55:27.318092 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:55:28.632298 kubelet[2898]: I0114 23:55:28.632208 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.632668 kubelet[2898]: I0114 23:55:28.632586 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.632938 kubelet[2898]: I0114 23:55:28.632905 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.633366 kubelet[2898]: I0114 23:55:28.633163 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.633549 kubelet[2898]: I0114 23:55:28.633518 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.633846 kubelet[2898]: I0114 23:55:28.633823 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.634176 kubelet[2898]: I0114 23:55:28.634151 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" 
pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.634391 kubelet[2898]: I0114 23:55:28.634368 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.634824 kubelet[2898]: I0114 23:55:28.634583 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:28.634824 kubelet[2898]: I0114 23:55:28.634792 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:29.633004 kubelet[2898]: E0114 23:55:29.632954 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:55:30.631982 
kubelet[2898]: E0114 23:55:30.631923 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:55:31.082834 kubelet[2898]: E0114 23:55:31.082787 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:55:33.633102 kubelet[2898]: E0114 23:55:33.633049 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:55:33.633507 kubelet[2898]: E0114 23:55:33.633403 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:55:35.631828 kubelet[2898]: I0114 23:55:35.631644 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:55:35.631828 kubelet[2898]: E0114 23:55:35.631791 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:55:36.082585 kubelet[2898]: E0114 23:55:36.082376 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC 
m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:55:36.632418 kubelet[2898]: E0114 23:55:36.632346 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:55:36.633213 kubelet[2898]: E0114 23:55:36.633140 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:55:37.653359 kubelet[2898]: E0114 23:55:37.653320 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:37Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:37Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:37Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:37Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:37.653720 kubelet[2898]: E0114 23:55:37.653594 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:37.653853 kubelet[2898]: E0114 23:55:37.653815 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:37.654035 kubelet[2898]: E0114 23:55:37.654015 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:37.654197 kubelet[2898]: E0114 23:55:37.654177 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get 
\"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:37.654197 kubelet[2898]: E0114 23:55:37.654195 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:55:38.084095 kubelet[2898]: E0114 23:55:38.084041 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:55:38.632596 kubelet[2898]: I0114 23:55:38.632516 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.633073 kubelet[2898]: I0114 23:55:38.633041 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.633248 kubelet[2898]: I0114 23:55:38.633220 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.633467 kubelet[2898]: I0114 23:55:38.633441 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.633674 kubelet[2898]: I0114 23:55:38.633654 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.633940 kubelet[2898]: I0114 23:55:38.633904 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.634282 kubelet[2898]: I0114 23:55:38.634218 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.634613 kubelet[2898]: I0114 23:55:38.634537 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.634755 kubelet[2898]: I0114 23:55:38.634732 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" 
pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:38.634986 kubelet[2898]: I0114 23:55:38.634950 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:55:42.633167 kubelet[2898]: E0114 23:55:42.633116 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:55:42.633540 kubelet[2898]: E0114 23:55:42.633251 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:55:45.085862 kubelet[2898]: E0114 23:55:45.085550 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:55:46.083941 kubelet[2898]: E0114 23:55:46.083718 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:55:46.633081 kubelet[2898]: E0114 23:55:46.633023 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack 
image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:55:46.633712 kubelet[2898]: E0114 23:55:46.633221 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:55:47.632087 kubelet[2898]: I0114 23:55:47.632044 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:55:47.634093 containerd[1695]: time="2026-01-14T23:55:47.633991935Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:4,}" Jan 14 23:55:47.644503 containerd[1695]: time="2026-01-14T23:55:47.644452367Z" level=info msg="Container 0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:55:47.652551 containerd[1695]: time="2026-01-14T23:55:47.652496872Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:4,} returns container id \"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\"" Jan 14 23:55:47.653081 containerd[1695]: time="2026-01-14T23:55:47.653059434Z" level=info msg="StartContainer for 
\"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\"" Jan 14 23:55:47.654386 containerd[1695]: time="2026-01-14T23:55:47.654359318Z" level=info msg="connecting to shim 0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:55:47.687722 systemd[1]: Started cri-containerd-0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52.scope - libcontainer container 0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52. Jan 14 23:55:47.698000 audit: BPF prog-id=304 op=LOAD Jan 14 23:55:47.700298 kernel: audit: type=1334 audit(1768434947.698:845): prog-id=304 op=LOAD Jan 14 23:55:47.700000 audit: BPF prog-id=305 op=LOAD Jan 14 23:55:47.700000 audit[6145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.705407 kernel: audit: type=1334 audit(1768434947.700:846): prog-id=305 op=LOAD Jan 14 23:55:47.705505 kernel: audit: type=1300 audit(1768434947.700:846): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176180 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.705534 kernel: audit: type=1327 audit(1768434947.700:846): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.700000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=305 op=UNLOAD Jan 14 23:55:47.709641 kernel: audit: type=1334 audit(1768434947.701:847): prog-id=305 op=UNLOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.713293 kernel: audit: type=1300 audit(1768434947.701:847): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.713357 kernel: audit: type=1327 audit(1768434947.701:847): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=306 op=LOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.720236 kernel: audit: type=1334 audit(1768434947.701:848): prog-id=306 op=LOAD Jan 14 23:55:47.720371 kernel: audit: type=1300 audit(1768434947.701:848): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001763e8 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.720471 kernel: audit: type=1327 audit(1768434947.701:848): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=307 op=LOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000176168 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=307 op=UNLOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=306 op=UNLOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.701000 audit: BPF prog-id=308 op=LOAD Jan 14 23:55:47.701000 audit[6145]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000176648 a2=98 a3=0 items=0 ppid=2573 pid=6145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:55:47.701000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3064363364656236393366396334666133653337316339636435366333 Jan 14 23:55:47.741592 containerd[1695]: time="2026-01-14T23:55:47.741543230Z" 
level=info msg="StartContainer for \"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\" returns successfully" Jan 14 23:55:49.632924 kubelet[2898]: E0114 23:55:49.632857 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:55:49.633356 kubelet[2898]: E0114 23:55:49.633204 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:55:54.632509 kubelet[2898]: E0114 23:55:54.632029 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and 
unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:55:56.632903 kubelet[2898]: E0114 23:55:56.632857 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:55:57.632132 kubelet[2898]: E0114 23:55:57.632070 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:55:58.014586 kubelet[2898]: E0114 23:55:58.014453 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:48Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:48Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:48Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:55:48Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 14 23:55:58.018380 kubelet[2898]: I0114 23:55:58.018225 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": net/http: TLS handshake timeout" Jan 14 23:55:58.634897 kubelet[2898]: E0114 23:55:58.634849 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:56:00.634757 kubelet[2898]: E0114 23:56:00.634693 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:56:02.086052 kubelet[2898]: E0114 23:56:02.085932 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" interval="7s" Jan 14 23:56:02.634132 kubelet[2898]: E0114 23:56:02.634064 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" 
podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:56:06.085873 kubelet[2898]: E0114 23:56:06.085749 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:07.632193 kubelet[2898]: E0114 23:56:07.632142 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:56:08.015376 kubelet[2898]: E0114 23:56:08.015113 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": net/http: request canceled while 
waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 14 23:56:08.019943 kubelet[2898]: I0114 23:56:08.019894 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": net/http: TLS handshake timeout" Jan 14 23:56:08.570547 systemd[1]: cri-containerd-0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52.scope: Deactivated successfully. Jan 14 23:56:08.571751 containerd[1695]: time="2026-01-14T23:56:08.571205823Z" level=info msg="received container exit event container_id:\"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\" id:\"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\" pid:6158 exit_status:255 exited_at:{seconds:1768434968 nanos:570898702}" Jan 14 23:56:08.576792 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:56:08.576885 kernel: audit: type=1334 audit(1768434968.575:853): prog-id=304 op=UNLOAD Jan 14 23:56:08.575000 audit: BPF prog-id=304 op=UNLOAD Jan 14 23:56:08.575000 audit: BPF prog-id=308 op=UNLOAD Jan 14 23:56:08.577841 kernel: audit: type=1334 audit(1768434968.575:854): prog-id=308 op=UNLOAD Jan 14 23:56:08.592142 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52-rootfs.mount: Deactivated successfully. 
Jan 14 23:56:09.062009 kubelet[2898]: I0114 23:56:09.061948 2898 scope.go:117] "RemoveContainer" containerID="7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75" Jan 14 23:56:09.062432 kubelet[2898]: I0114 23:56:09.062316 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:09.062514 kubelet[2898]: E0114 23:56:09.062476 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:56:09.063805 containerd[1695]: time="2026-01-14T23:56:09.063769051Z" level=info msg="RemoveContainer for \"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\"" Jan 14 23:56:09.068792 containerd[1695]: time="2026-01-14T23:56:09.068710227Z" level=info msg="RemoveContainer for \"7e6c821706f01847c217f9092127af1272a04f470ce9bd282d7d30ed5f36fa75\" returns successfully" Jan 14 23:56:09.086860 kubelet[2898]: E0114 23:56:09.086796 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:56:09.570628 kubelet[2898]: E0114 23:56:09.570289 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:55968->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:56:09.570628 
kubelet[2898]: I0114 23:56:09.570576 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:55982->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:56:09.570793 kubelet[2898]: I0114 23:56:09.570761 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.571449 kubelet[2898]: E0114 23:56:09.571262 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.572362 kubelet[2898]: I0114 23:56:09.572126 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.572362 kubelet[2898]: E0114 23:56:09.572291 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.572362 kubelet[2898]: E0114 23:56:09.572309 2898 kubelet_node_status.go:535] "Unable to update node 
status" err="update node status exceeds retry count" Jan 14 23:56:09.572457 kubelet[2898]: I0114 23:56:09.572376 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.572868 kubelet[2898]: I0114 23:56:09.572817 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.573284 kubelet[2898]: I0114 23:56:09.573221 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.573705 kubelet[2898]: I0114 23:56:09.573494 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.573857 kubelet[2898]: I0114 23:56:09.573830 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 
23:56:09.574395 kubelet[2898]: I0114 23:56:09.574354 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.575288 kubelet[2898]: I0114 23:56:09.574637 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.575544 kubelet[2898]: I0114 23:56:09.575518 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.575921 kubelet[2898]: I0114 23:56:09.575888 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.576218 kubelet[2898]: I0114 23:56:09.576184 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.576496 kubelet[2898]: I0114 
23:56:09.576472 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.576790 kubelet[2898]: I0114 23:56:09.576713 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.577954 kubelet[2898]: I0114 23:56:09.577013 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.578374 kubelet[2898]: I0114 23:56:09.578328 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.579564 kubelet[2898]: I0114 23:56:09.579528 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.580316 kubelet[2898]: I0114 23:56:09.579882 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.580790 kubelet[2898]: I0114 23:56:09.580755 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.581080 kubelet[2898]: I0114 23:56:09.581052 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.581521 kubelet[2898]: I0114 23:56:09.581324 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.581636 kubelet[2898]: I0114 23:56:09.581605 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.581893 kubelet[2898]: I0114 23:56:09.581868 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" 
pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.582103 kubelet[2898]: I0114 23:56:09.582082 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.582305 kubelet[2898]: I0114 23:56:09.582259 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.582732 kubelet[2898]: I0114 23:56:09.582525 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:09.582795 kubelet[2898]: I0114 23:56:09.582777 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.066183 kubelet[2898]: I0114 23:56:10.066156 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:10.067544 kubelet[2898]: E0114 23:56:10.066304 2898 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:56:10.067544 kubelet[2898]: I0114 23:56:10.066447 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.067544 kubelet[2898]: I0114 23:56:10.066685 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.067544 kubelet[2898]: I0114 23:56:10.066855 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.067544 kubelet[2898]: I0114 23:56:10.067036 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.067544 kubelet[2898]: I0114 23:56:10.067195 2898 status_manager.go:890] 
"Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.067945 kubelet[2898]: I0114 23:56:10.067912 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.068185 kubelet[2898]: I0114 23:56:10.068145 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.068428 kubelet[2898]: I0114 23:56:10.068406 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.068617 kubelet[2898]: I0114 23:56:10.068597 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.068873 kubelet[2898]: I0114 23:56:10.068839 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" 
pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:10.632294 kubelet[2898]: E0114 23:56:10.632213 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:56:10.632494 kubelet[2898]: E0114 23:56:10.632305 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:56:11.632690 kubelet[2898]: E0114 23:56:11.632646 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:56:11.895983 kubelet[2898]: I0114 23:56:11.895868 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:11.896095 kubelet[2898]: E0114 23:56:11.896028 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:56:13.632599 kubelet[2898]: E0114 23:56:13.632548 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:56:16.087784 kubelet[2898]: E0114 23:56:16.087703 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:56:16.088183 kubelet[2898]: E0114 23:56:16.087661 2898 event.go:368] 
"Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f1f5c3\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f1f5c3 calico-system 1958 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:15,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:16.088183 kubelet[2898]: E0114 23:56:16.087845 2898 event.go:307] "Unable to write event (retry limit exceeded!)" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abe045c350f75 calico-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,LastTimestamp:2026-01-14 23:52:09.632468853 +0000 UTC m=+411.082007407,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:16.088564 kubelet[2898]: 
E0114 23:56:16.088413 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:16.632911 kubelet[2898]: E0114 23:56:16.632389 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:56:18.631335 
kubelet[2898]: E0114 23:56:18.630760 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:18.631738 kubelet[2898]: I0114 23:56:18.631398 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.631886 kubelet[2898]: I0114 23:56:18.631859 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.632154 kubelet[2898]: I0114 23:56:18.632125 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" 
pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.632671 kubelet[2898]: I0114 23:56:18.632637 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.632964 kubelet[2898]: I0114 23:56:18.632941 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.633413 kubelet[2898]: I0114 23:56:18.633205 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.633531 kubelet[2898]: I0114 23:56:18.633503 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.633795 kubelet[2898]: I0114 23:56:18.633767 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" 
err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.634231 kubelet[2898]: I0114 23:56:18.634042 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:18.634322 kubelet[2898]: I0114 23:56:18.634292 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.922313 kubelet[2898]: E0114 23:56:19.922278 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:19Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:19Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:19Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:19Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.923443 kubelet[2898]: E0114 23:56:19.922895 2898 kubelet_node_status.go:548] "Error updating node 
status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.923443 kubelet[2898]: E0114 23:56:19.923055 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.923443 kubelet[2898]: E0114 23:56:19.923219 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.923443 kubelet[2898]: E0114 23:56:19.923405 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:19.923443 kubelet[2898]: E0114 23:56:19.923420 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:56:21.632811 kubelet[2898]: E0114 23:56:21.632729 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:56:22.632334 kubelet[2898]: I0114 
23:56:22.632231 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:22.632506 kubelet[2898]: E0114 23:56:22.632469 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:56:22.632585 kubelet[2898]: E0114 23:56:22.632559 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:56:22.632673 kubelet[2898]: E0114 23:56:22.632580 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:56:23.088823 kubelet[2898]: E0114 23:56:23.088781 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get 
\"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:56:25.632505 kubelet[2898]: E0114 23:56:25.632459 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:56:25.632888 kubelet[2898]: E0114 23:56:25.632759 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:56:28.632680 kubelet[2898]: I0114 23:56:28.632538 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633161 kubelet[2898]: I0114 23:56:28.632759 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633161 kubelet[2898]: I0114 23:56:28.632912 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633161 kubelet[2898]: E0114 23:56:28.632924 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:28.633161 
kubelet[2898]: I0114 23:56:28.633052 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633324 kubelet[2898]: I0114 23:56:28.633197 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633414 kubelet[2898]: I0114 23:56:28.633361 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.633610 kubelet[2898]: I0114 23:56:28.633573 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.634087 kubelet[2898]: I0114 23:56:28.634056 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.634313 kubelet[2898]: I0114 23:56:28.634286 2898 
status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.634514 kubelet[2898]: I0114 23:56:28.634480 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:28.635225 kubelet[2898]: E0114 23:56:28.635087 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:56:30.090613 kubelet[2898]: E0114 23:56:30.090501 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" 
interval="7s" Jan 14 23:56:30.173948 kubelet[2898]: E0114 23:56:30.173881 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:30Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:30Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:30Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:30Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:30.174153 kubelet[2898]: E0114 23:56:30.174123 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:30.174367 kubelet[2898]: E0114 23:56:30.174349 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:30.174841 kubelet[2898]: E0114 23:56:30.174570 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:30.174841 
kubelet[2898]: E0114 23:56:30.174802 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:30.174841 kubelet[2898]: E0114 23:56:30.174817 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:56:33.632735 kubelet[2898]: E0114 23:56:33.632618 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:56:36.632546 kubelet[2898]: I0114 23:56:36.632498 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:36.632942 kubelet[2898]: E0114 23:56:36.632650 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:56:36.633726 kubelet[2898]: E0114 23:56:36.633669 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code 
= NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:56:36.634114 kubelet[2898]: E0114 23:56:36.634069 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:56:37.091739 kubelet[2898]: E0114 23:56:37.091658 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:56:37.632550 kubelet[2898]: E0114 23:56:37.632487 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:56:38.632523 kubelet[2898]: I0114 23:56:38.632431 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.632820 kubelet[2898]: I0114 23:56:38.632772 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.633071 kubelet[2898]: I0114 23:56:38.632946 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.633110 kubelet[2898]: I0114 23:56:38.633093 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.633313 kubelet[2898]: I0114 23:56:38.633258 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial 
tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.633436 kubelet[2898]: E0114 23:56:38.633331 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:38.633513 kubelet[2898]: I0114 23:56:38.633451 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.633695 kubelet[2898]: I0114 23:56:38.633649 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.634003 kubelet[2898]: I0114 23:56:38.633902 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.634181 kubelet[2898]: I0114 23:56:38.634142 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:38.634528 kubelet[2898]: I0114 23:56:38.634466 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:40.457244 kubelet[2898]: E0114 23:56:40.457190 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:40Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: 
connect: connection refused" Jan 14 23:56:40.457787 kubelet[2898]: E0114 23:56:40.457490 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:40.457787 kubelet[2898]: E0114 23:56:40.457754 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:40.458233 kubelet[2898]: E0114 23:56:40.457985 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:40.458233 kubelet[2898]: E0114 23:56:40.458196 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:40.458233 kubelet[2898]: E0114 23:56:40.458210 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:56:40.633058 kubelet[2898]: E0114 23:56:40.633013 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" 
pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:56:43.632775 kubelet[2898]: E0114 23:56:43.632726 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:56:44.092822 kubelet[2898]: E0114 23:56:44.092765 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:56:46.631998 kubelet[2898]: E0114 23:56:46.631920 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 
23:56:48.632369 kubelet[2898]: I0114 23:56:48.632322 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:56:48.632767 kubelet[2898]: I0114 23:56:48.632737 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633004 kubelet[2898]: I0114 23:56:48.632968 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633161 kubelet[2898]: I0114 23:56:48.633139 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633358 kubelet[2898]: I0114 23:56:48.633338 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633528 kubelet[2898]: I0114 23:56:48.633507 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get 
\"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633684 kubelet[2898]: I0114 23:56:48.633663 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.633786 kubelet[2898]: E0114 23:56:48.633693 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:56:48.633867 kubelet[2898]: I0114 23:56:48.633815 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 
14 23:56:48.633998 kubelet[2898]: I0114 23:56:48.633978 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.634160 kubelet[2898]: I0114 23:56:48.634142 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.634420 kubelet[2898]: I0114 23:56:48.634393 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:56:48.636012 containerd[1695]: time="2026-01-14T23:56:48.635983991Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:5,}" Jan 14 23:56:48.646707 containerd[1695]: time="2026-01-14T23:56:48.646653264Z" level=info msg="Container b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173: CDI devices from CRI Config.CDIDevices: []" Jan 14 23:56:48.649172 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4020398603.mount: Deactivated successfully. 
Jan 14 23:56:48.654694 containerd[1695]: time="2026-01-14T23:56:48.654657689Z" level=info msg="CreateContainer within sandbox \"73955e808bb581e4bdc7aa2d5959ccdb604f9ee318c1f32b7c04db31e6ab18b5\" for &ContainerMetadata{Name:kube-apiserver,Attempt:5,} returns container id \"b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173\"" Jan 14 23:56:48.655119 containerd[1695]: time="2026-01-14T23:56:48.655088570Z" level=info msg="StartContainer for \"b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173\"" Jan 14 23:56:48.656244 containerd[1695]: time="2026-01-14T23:56:48.656219933Z" level=info msg="connecting to shim b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173" address="unix:///run/containerd/s/7e84ac729b0b3ea71e7a94c01c77c659ed3788ac652f51d62699bc5cf53d0528" protocol=ttrpc version=3 Jan 14 23:56:48.683507 systemd[1]: Started cri-containerd-b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173.scope - libcontainer container b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173. 
Jan 14 23:56:48.695000 audit: BPF prog-id=309 op=LOAD Jan 14 23:56:48.697393 kernel: audit: type=1334 audit(1768435008.695:855): prog-id=309 op=LOAD Jan 14 23:56:48.695000 audit: BPF prog-id=310 op=LOAD Jan 14 23:56:48.695000 audit[6281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.702024 kernel: audit: type=1334 audit(1768435008.695:856): prog-id=310 op=LOAD Jan 14 23:56:48.702081 kernel: audit: type=1300 audit(1768435008.695:856): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c180 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.695000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.705596 kernel: audit: type=1327 audit(1768435008.695:856): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.696000 audit: BPF prog-id=310 op=UNLOAD Jan 14 23:56:48.706652 kernel: audit: type=1334 audit(1768435008.696:857): prog-id=310 op=UNLOAD Jan 14 23:56:48.696000 audit[6281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.710188 kernel: audit: type=1300 audit(1768435008.696:857): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.713599 kernel: audit: type=1327 audit(1768435008.696:857): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.696000 audit: BPF prog-id=311 op=LOAD Jan 14 23:56:48.714554 kernel: audit: type=1334 audit(1768435008.696:858): prog-id=311 op=LOAD Jan 14 23:56:48.714672 kernel: audit: type=1300 audit(1768435008.696:858): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.696000 audit[6281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c3e8 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.696000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.721556 kernel: audit: type=1327 audit(1768435008.696:858): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.696000 audit: BPF prog-id=312 op=LOAD Jan 14 23:56:48.696000 audit[6281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400018c168 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.696000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.697000 audit: BPF prog-id=312 op=UNLOAD Jan 14 23:56:48.697000 audit[6281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.697000 audit: 
BPF prog-id=311 op=UNLOAD Jan 14 23:56:48.697000 audit[6281]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.697000 audit: BPF prog-id=313 op=LOAD Jan 14 23:56:48.697000 audit[6281]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400018c648 a2=98 a3=0 items=0 ppid=2573 pid=6281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 14 23:56:48.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6239613564343366653535333431396336303764373230376138393733 Jan 14 23:56:48.742943 containerd[1695]: time="2026-01-14T23:56:48.742853999Z" level=info msg="StartContainer for \"b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173\" returns successfully" Jan 14 23:56:49.632320 kubelet[2898]: E0114 23:56:49.632274 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:56:50.632190 kubelet[2898]: E0114 23:56:50.632139 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:56:51.632839 kubelet[2898]: E0114 23:56:51.632787 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:56:52.633073 kubelet[2898]: E0114 23:56:52.633027 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:56:58.633769 kubelet[2898]: E0114 23:56:58.633726 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:56:59.152815 kubelet[2898]: I0114 23:56:59.152495 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": net/http: TLS handshake timeout" Jan 14 23:57:00.571312 kubelet[2898]: E0114 23:57:00.571200 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status 
\"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:56:50Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": net/http: request canceled while waiting for connection (Client.Timeout exceeded while awaiting headers)" Jan 14 23:57:01.094697 kubelet[2898]: E0114 23:57:01.094607 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded" interval="7s" Jan 14 23:57:01.632124 kubelet[2898]: E0114 23:57:01.632069 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:57:01.632840 kubelet[2898]: E0114 23:57:01.632813 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:57:04.632554 kubelet[2898]: E0114 23:57:04.632480 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:57:06.634146 kubelet[2898]: E0114 23:57:06.633996 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:57:07.632232 kubelet[2898]: E0114 23:57:07.632165 2898 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:57:08.635383 kubelet[2898]: E0114 23:57:08.635158 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": net/http: TLS handshake timeout" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:57:09.153689 kubelet[2898]: I0114 23:57:09.153624 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": net/http: TLS handshake timeout" Jan 14 23:57:09.896027 systemd[1]: 
cri-containerd-b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173.scope: Deactivated successfully. Jan 14 23:57:09.896378 systemd[1]: cri-containerd-b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173.scope: Consumed 1.253s CPU time, 24.2M memory peak. Jan 14 23:57:09.897599 containerd[1695]: time="2026-01-14T23:57:09.897558121Z" level=info msg="received container exit event container_id:\"b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173\" id:\"b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173\" pid:6294 exit_status:255 exited_at:{seconds:1768435029 nanos:897357680}" Jan 14 23:57:09.899000 audit: BPF prog-id=309 op=UNLOAD Jan 14 23:57:09.901886 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 14 23:57:09.901966 kernel: audit: type=1334 audit(1768435029.899:863): prog-id=309 op=UNLOAD Jan 14 23:57:09.901987 kernel: audit: type=1334 audit(1768435029.899:864): prog-id=313 op=UNLOAD Jan 14 23:57:09.899000 audit: BPF prog-id=313 op=UNLOAD Jan 14 23:57:09.917877 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173-rootfs.mount: Deactivated successfully. 
Jan 14 23:57:10.193707 kubelet[2898]: I0114 23:57:10.193538 2898 scope.go:117] "RemoveContainer" containerID="0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52" Jan 14 23:57:10.194933 kubelet[2898]: I0114 23:57:10.194696 2898 scope.go:117] "RemoveContainer" containerID="b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173" Jan 14 23:57:10.194933 kubelet[2898]: E0114 23:57:10.194845 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:57:10.195405 containerd[1695]: time="2026-01-14T23:57:10.195374193Z" level=info msg="RemoveContainer for \"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\"" Jan 14 23:57:10.203821 containerd[1695]: time="2026-01-14T23:57:10.203771219Z" level=info msg="RemoveContainer for \"0d63deb693f9c4fa3e371c9cd56c33858bf3ecfa818ae551577f6b5eed2b3d52\" returns successfully" Jan 14 23:57:10.897068 kubelet[2898]: I0114 23:57:10.896598 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:45182->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:57:10.897068 kubelet[2898]: E0114 23:57:10.896666 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": context deadline exceeded - error 
from a previous attempt: read tcp 10.0.22.230:44158->10.0.22.230:6443: read: connection reset by peer" Jan 14 23:57:10.897256 kubelet[2898]: E0114 23:57:10.897180 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.898629 kubelet[2898]: E0114 23:57:10.897697 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.898629 kubelet[2898]: I0114 23:57:10.897724 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.898629 kubelet[2898]: E0114 23:57:10.897922 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.898629 kubelet[2898]: E0114 23:57:10.897937 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:57:10.898629 kubelet[2898]: I0114 23:57:10.897991 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 
10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.898629 kubelet[2898]: E0114 23:57:10.898070 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused - error from a previous attempt: read tcp 10.0.22.230:44340->10.0.22.230:6443: read: connection reset by peer" interval="7s" Jan 14 23:57:10.899087 kubelet[2898]: I0114 23:57:10.899046 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.899344 kubelet[2898]: I0114 23:57:10.899304 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.899654 kubelet[2898]: I0114 23:57:10.899614 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.900314 kubelet[2898]: I0114 23:57:10.899886 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 
10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.900965 kubelet[2898]: I0114 23:57:10.900892 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.901963 kubelet[2898]: I0114 23:57:10.901236 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.902246 kubelet[2898]: I0114 23:57:10.902208 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.902553 kubelet[2898]: I0114 23:57:10.902512 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.902832 kubelet[2898]: I0114 23:57:10.902794 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.903107 
kubelet[2898]: I0114 23:57:10.903068 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.903401 kubelet[2898]: I0114 23:57:10.903363 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.903691 kubelet[2898]: I0114 23:57:10.903643 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.904001 kubelet[2898]: I0114 23:57:10.903965 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.904285 kubelet[2898]: I0114 23:57:10.904232 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.904609 kubelet[2898]: I0114 23:57:10.904575 2898 
status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.905134 kubelet[2898]: I0114 23:57:10.905099 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.906387 kubelet[2898]: I0114 23:57:10.906318 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.906780 kubelet[2898]: I0114 23:57:10.906698 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.907116 kubelet[2898]: I0114 23:57:10.907079 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.907687 kubelet[2898]: I0114 23:57:10.907491 2898 status_manager.go:890] "Failed to get status for 
pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.908028 kubelet[2898]: I0114 23:57:10.907992 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.908324 kubelet[2898]: I0114 23:57:10.908260 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.908612 kubelet[2898]: I0114 23:57:10.908579 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.908946 kubelet[2898]: I0114 23:57:10.908873 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:10.909144 kubelet[2898]: I0114 23:57:10.909120 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" 
err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.895897 kubelet[2898]: I0114 23:57:11.895704 2898 scope.go:117] "RemoveContainer" containerID="b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173" Jan 14 23:57:11.895897 kubelet[2898]: E0114 23:57:11.895852 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:57:11.896292 kubelet[2898]: I0114 23:57:11.896065 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.896355 kubelet[2898]: I0114 23:57:11.896322 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.896797 kubelet[2898]: I0114 23:57:11.896589 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.896898 kubelet[2898]: I0114 23:57:11.896868 2898 
status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.897135 kubelet[2898]: I0114 23:57:11.897103 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.897730 kubelet[2898]: I0114 23:57:11.897363 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.897730 kubelet[2898]: I0114 23:57:11.897573 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.897834 kubelet[2898]: I0114 23:57:11.897797 2898 status_manager.go:890] "Failed to get status for pod" podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.898028 kubelet[2898]: I0114 23:57:11.898007 2898 status_manager.go:890] 
"Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:11.898247 kubelet[2898]: I0114 23:57:11.898222 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:13.632638 kubelet[2898]: E0114 23:57:13.632560 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" Jan 14 23:57:13.633261 kubelet[2898]: E0114 23:57:13.633198 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-2lqxs" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" Jan 14 23:57:13.670525 containerd[1695]: time="2026-01-14T23:57:13.670406118Z" level=info msg="container event discarded" container=146be48759f300999a13f366a5fca7d56c04434cbfef1155bdc146dc6ff8b1ae type=CONTAINER_STOPPED_EVENT Jan 14 23:57:13.931398 containerd[1695]: time="2026-01-14T23:57:13.931159357Z" level=info msg="container event discarded" container=dff0729402de1bf5093fca769b0a24b587f7578a2d449728d4aa31ed19490d92 type=CONTAINER_STOPPED_EVENT Jan 14 23:57:14.064692 containerd[1695]: time="2026-01-14T23:57:14.064599566Z" level=info msg="container event discarded" container=8bea3fa176370ca67966494230d459d02bf23240301737428cb9f8cbb74ea206 type=CONTAINER_STOPPED_EVENT Jan 14 23:57:14.568006 containerd[1695]: time="2026-01-14T23:57:14.567932747Z" level=info msg="container event discarded" container=990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9 type=CONTAINER_CREATED_EVENT Jan 14 23:57:14.568006 containerd[1695]: time="2026-01-14T23:57:14.567986468Z" level=info msg="container event discarded" container=c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae type=CONTAINER_CREATED_EVENT Jan 14 23:57:14.591573 containerd[1695]: time="2026-01-14T23:57:14.591516460Z" level=info msg="container event discarded" container=25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe type=CONTAINER_CREATED_EVENT Jan 14 23:57:14.644861 containerd[1695]: time="2026-01-14T23:57:14.644759983Z" level=info msg="container event discarded" container=990ffed14b969f2dbc3ff21c5a94068cc02591f987cb43de94d880554d6260f9 type=CONTAINER_STARTED_EVENT Jan 14 23:57:14.665103 containerd[1695]: time="2026-01-14T23:57:14.664962645Z" level=info msg="container event discarded" 
container=25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe type=CONTAINER_STARTED_EVENT Jan 14 23:57:14.665103 containerd[1695]: time="2026-01-14T23:57:14.665054485Z" level=info msg="container event discarded" container=c9fffafbe337617857098109cac6f66dbf3fe9d21ab14d120c579b075e3953ae type=CONTAINER_STARTED_EVENT Jan 14 23:57:15.632965 kubelet[2898]: E0114 23:57:15.632890 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5sxpk" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" Jan 14 23:57:15.633376 kubelet[2898]: E0114 23:57:15.633000 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" Jan 14 23:57:17.898969 kubelet[2898]: E0114 23:57:17.898891 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:57:18.632445 kubelet[2898]: I0114 23:57:18.632389 2898 status_manager.go:890] "Failed to get status for pod" 
podUID="2600c830ca674ed87b79b96ba000ed32" pod="kube-system/kube-controller-manager-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-controller-manager-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.632742 kubelet[2898]: I0114 23:57:18.632691 2898 status_manager.go:890] "Failed to get status for pod" podUID="549af1a4-d10d-41a8-bd81-9ce05836d164" pod="tigera-operator/tigera-operator-7dcd859c48-hg526" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/tigera-operator/pods/tigera-operator-7dcd859c48-hg526\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.633006 kubelet[2898]: I0114 23:57:18.632963 2898 status_manager.go:890] "Failed to get status for pod" podUID="300b5f0b-ed7c-4a04-a4b8-68a71ea25297" pod="calico-apiserver/calico-apiserver-5b767987c5-2glxx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-2glxx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.634065 kubelet[2898]: I0114 23:57:18.633787 2898 status_manager.go:890] "Failed to get status for pod" podUID="2d307ca4-cd62-4987-b2dc-ed6b76a2794e" pod="calico-system/calico-kube-controllers-7cd9b5689c-544p6" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/calico-kube-controllers-7cd9b5689c-544p6\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.634065 kubelet[2898]: I0114 23:57:18.634006 2898 status_manager.go:890] "Failed to get status for pod" podUID="fcec49c5-6358-46d9-9922-8a81fb4bafd8" pod="calico-system/goldmane-666569f655-5sxpk" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/goldmane-666569f655-5sxpk\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.634192 kubelet[2898]: I0114 23:57:18.634174 2898 status_manager.go:890] "Failed to get status for pod" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" 
pod="calico-system/whisker-7f94899ccb-pnwbr" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/whisker-7f94899ccb-pnwbr\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.634539 kubelet[2898]: I0114 23:57:18.634397 2898 status_manager.go:890] "Failed to get status for pod" podUID="0b87770b8d26d1b1663c3229f1382cec" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-apiserver-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.634928 kubelet[2898]: I0114 23:57:18.634726 2898 status_manager.go:890] "Failed to get status for pod" podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-apiserver/pods/calico-apiserver-5b767987c5-49kdx\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.635029 kubelet[2898]: I0114 23:57:18.634992 2898 status_manager.go:890] "Failed to get status for pod" podUID="a0fb221571f90a6b03ac373000837dfe" pod="kube-system/kube-scheduler-ci-4515-1-0-n-1d3be4f164" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/kube-system/pods/kube-scheduler-ci-4515-1-0-n-1d3be4f164\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.635392 kubelet[2898]: I0114 23:57:18.635361 2898 status_manager.go:890] "Failed to get status for pod" podUID="5c454d6a-8fe3-46dd-a39b-d216b7be481d" pod="calico-system/csi-node-driver-2lqxs" err="Get \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/pods/csi-node-driver-2lqxs\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:18.635758 kubelet[2898]: E0114 23:57:18.635647 2898 event.go:368] "Unable to write event (may retry after sleeping)" err="Patch \"https://10.0.22.230:6443/api/v1/namespaces/calico-system/events/goldmane-666569f655-5sxpk.188abdce32f243bc\": dial tcp 
10.0.22.230:6443: connect: connection refused" event="&Event{ObjectMeta:{goldmane-666569f655-5sxpk.188abdce32f243bc calico-system 1959 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:goldmane-666569f655-5sxpk,UID:fcec49c5-6358-46d9-9922-8a81fb4bafd8,APIVersion:v1,ResourceVersion:1054,FieldPath:spec.containers{goldmane},},Reason:Failed,Message:Error: ImagePullBackOff,Source:EventSource{Component:kubelet,Host:ci-4515-1-0-n-1d3be4f164,},FirstTimestamp:2026-01-14 23:48:17 +0000 UTC,LastTimestamp:2026-01-14 23:52:09.632488853 +0000 UTC m=+411.082027407,Count:15,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4515-1-0-n-1d3be4f164,}" Jan 14 23:57:19.886899 kubelet[2898]: I0114 23:57:19.886845 2898 scope.go:117] "RemoveContainer" containerID="b9a5d43fe553419c607d7207a89731fe64e8bf435e8935b77cff33dee5006173" Jan 14 23:57:19.887249 kubelet[2898]: E0114 23:57:19.887002 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver\" with CrashLoopBackOff: \"back-off 1m20s restarting failed container=kube-apiserver pod=kube-apiserver-ci-4515-1-0-n-1d3be4f164_kube-system(0b87770b8d26d1b1663c3229f1382cec)\"" pod="kube-system/kube-apiserver-ci-4515-1-0-n-1d3be4f164" podUID="0b87770b8d26d1b1663c3229f1382cec" Jan 14 23:57:20.632453 kubelet[2898]: E0114 23:57:20.632405 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-5b767987c5-49kdx" 
podUID="5eca9ff5-ed57-4795-b82c-c2e2b81c8474" Jan 14 23:57:21.032120 kubelet[2898]: E0114 23:57:21.032074 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"NetworkUnavailable\\\"},{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:57:21Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:57:21Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:57:21Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-14T23:57:21Z\\\",\\\"type\\\":\\\"Ready\\\"}]}}\" for node \"ci-4515-1-0-n-1d3be4f164\": Patch \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164/status?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:21.032476 kubelet[2898]: E0114 23:57:21.032287 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:21.032705 kubelet[2898]: E0114 23:57:21.032643 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:21.032911 kubelet[2898]: E0114 23:57:21.032878 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" 
Jan 14 23:57:21.033067 kubelet[2898]: E0114 23:57:21.033051 2898 kubelet_node_status.go:548] "Error updating node status, will retry" err="error getting node \"ci-4515-1-0-n-1d3be4f164\": Get \"https://10.0.22.230:6443/api/v1/nodes/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" Jan 14 23:57:21.033103 kubelet[2898]: E0114 23:57:21.033068 2898 kubelet_node_status.go:535] "Unable to update node status" err="update node status exceeds retry count" Jan 14 23:57:21.632279 kubelet[2898]: E0114 23:57:21.632218 2898 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-7f94899ccb-pnwbr" podUID="63f0b6ec-9977-4e0c-b6a6-80408e82ee47" Jan 14 23:57:22.931377 kernel: pcieport 0000:00:01.0: pciehp: Slot(0): Button press: will power off in 5 sec Jan 14 23:57:24.900394 kubelet[2898]: E0114 23:57:24.900216 2898 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.22.230:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4515-1-0-n-1d3be4f164?timeout=10s\": dial tcp 10.0.22.230:6443: connect: connection refused" interval="7s" Jan 14 23:57:25.926156 containerd[1695]: time="2026-01-14T23:57:25.926098540Z" level=info msg="container event 
discarded" container=25a3686f02cbd1284d6dd1b7c58cf3c096d662d0d941bf4e3cec7659dc796efe type=CONTAINER_STOPPED_EVENT