Jan 16 18:01:57.497015 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 18:01:57.497046 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 16 03:04:27 -00 2026
Jan 16 18:01:57.497057 kernel: KASLR enabled
Jan 16 18:01:57.497064 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 18:01:57.497070 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390b8118 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 16 18:01:57.497076 kernel: random: crng init done
Jan 16 18:01:57.497084 kernel: secureboot: Secure boot disabled
Jan 16 18:01:57.497090 kernel: ACPI: Early table checksum verification disabled
Jan 16 18:01:57.497097 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 18:01:57.497105 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 18:01:57.497112 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497118 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497125 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497132 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497142 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497148 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497155 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497162 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497169 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 18:01:57.497176 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 18:01:57.497183 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 18:01:57.497190 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 16 18:01:57.497197 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 18:01:57.497205 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 16 18:01:57.497212 kernel: Zone ranges:
Jan 16 18:01:57.497219 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 18:01:57.497225 kernel: DMA32 empty
Jan 16 18:01:57.497232 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 18:01:57.497239 kernel: Device empty
Jan 16 18:01:57.497246 kernel: Movable zone start for each node
Jan 16 18:01:57.497252 kernel: Early memory node ranges
Jan 16 18:01:57.497259 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 16 18:01:57.497266 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 16 18:01:57.497273 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 16 18:01:57.497280 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 18:01:57.497288 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 18:01:57.497295 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 18:01:57.497301 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 18:01:57.497308 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 18:01:57.497315 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 18:01:57.497325 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 18:01:57.497334 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 18:01:57.497341 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 16 18:01:57.497349 kernel: psci: probing for conduit method from ACPI.
Jan 16 18:01:57.497356 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 18:01:57.497363 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 18:01:57.497370 kernel: psci: Trusted OS migration not required
Jan 16 18:01:57.497378 kernel: psci: SMC Calling Convention v1.1
Jan 16 18:01:57.497385 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 18:01:57.497394 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 16 18:01:57.497402 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 16 18:01:57.497409 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 18:01:57.497417 kernel: Detected PIPT I-cache on CPU0
Jan 16 18:01:57.497424 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 18:01:57.497432 kernel: CPU features: detected: Spectre-v4
Jan 16 18:01:57.497439 kernel: CPU features: detected: Spectre-BHB
Jan 16 18:01:57.497447 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 18:01:57.497454 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 18:01:57.497462 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 18:01:57.497469 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 18:01:57.497478 kernel: alternatives: applying boot alternatives
Jan 16 18:01:57.497486 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13
Jan 16 18:01:57.497494 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 18:01:57.497501 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 18:01:57.497508 kernel: Fallback order for Node 0: 0
Jan 16 18:01:57.497516 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Jan 16 18:01:57.497523 kernel: Policy zone: Normal
Jan 16 18:01:57.497530 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 18:01:57.497538 kernel: software IO TLB: area num 2.
Jan 16 18:01:57.497545 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 16 18:01:57.497554 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 18:01:57.497561 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 18:01:57.497569 kernel: rcu: RCU event tracing is enabled.
Jan 16 18:01:57.497577 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 18:01:57.497584 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 18:01:57.497592 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 18:01:57.497599 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 18:01:57.497606 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 18:01:57.497614 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 18:01:57.497621 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 18:01:57.497628 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 18:01:57.497637 kernel: GICv3: 256 SPIs implemented
Jan 16 18:01:57.497644 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 18:01:57.497651 kernel: Root IRQ handler: gic_handle_irq
Jan 16 18:01:57.497658 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 18:01:57.497666 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 16 18:01:57.497673 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 18:01:57.497680 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 18:01:57.497688 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 18:01:57.497695 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 16 18:01:57.497703 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 16 18:01:57.497710 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 16 18:01:57.497719 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 18:01:57.497726 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 18:01:57.497733 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 18:01:57.497741 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 18:01:57.497748 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 18:01:57.497756 kernel: Console: colour dummy device 80x25
Jan 16 18:01:57.497764 kernel: ACPI: Core revision 20240827
Jan 16 18:01:57.497772 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 18:01:57.497780 kernel: pid_max: default: 32768 minimum: 301
Jan 16 18:01:57.497789 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 16 18:01:57.497797 kernel: landlock: Up and running.
Jan 16 18:01:57.497805 kernel: SELinux: Initializing.
Jan 16 18:01:57.497812 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 18:01:57.497820 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 18:01:57.497828 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 18:01:57.497836 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 18:01:57.497844 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 16 18:01:57.497854 kernel: Remapping and enabling EFI services.
Jan 16 18:01:57.497861 kernel: smp: Bringing up secondary CPUs ...
Jan 16 18:01:57.497869 kernel: Detected PIPT I-cache on CPU1
Jan 16 18:01:57.497876 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 18:01:57.497884 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 16 18:01:57.497904 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 18:01:57.497913 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 18:01:57.497923 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 18:01:57.497931 kernel: SMP: Total of 2 processors activated.
Jan 16 18:01:57.497962 kernel: CPU: All CPU(s) started at EL1
Jan 16 18:01:57.497974 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 18:01:57.497983 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 18:01:57.497992 kernel: CPU features: detected: Common not Private translations
Jan 16 18:01:57.498000 kernel: CPU features: detected: CRC32 instructions
Jan 16 18:01:57.498008 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 18:01:57.498018 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 18:01:57.498026 kernel: CPU features: detected: LSE atomic instructions
Jan 16 18:01:57.498034 kernel: CPU features: detected: Privileged Access Never
Jan 16 18:01:57.498042 kernel: CPU features: detected: RAS Extension Support
Jan 16 18:01:57.498050 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 18:01:57.498059 kernel: alternatives: applying system-wide alternatives
Jan 16 18:01:57.498068 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 16 18:01:57.498077 kernel: Memory: 3885860K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 188660K reserved, 16384K cma-reserved)
Jan 16 18:01:57.498085 kernel: devtmpfs: initialized
Jan 16 18:01:57.498093 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 18:01:57.498102 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 18:01:57.498111 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 18:01:57.498119 kernel: 0 pages in range for non-PLT usage
Jan 16 18:01:57.498128 kernel: 515152 pages in range for PLT usage
Jan 16 18:01:57.498136 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 18:01:57.498145 kernel: SMBIOS 3.0.0 present.
Jan 16 18:01:57.498153 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 18:01:57.498161 kernel: DMI: Memory slots populated: 1/1
Jan 16 18:01:57.498169 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 18:01:57.498177 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 18:01:57.498187 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 18:01:57.498196 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 18:01:57.498204 kernel: audit: initializing netlink subsys (disabled)
Jan 16 18:01:57.498212 kernel: audit: type=2000 audit(0.012:1): state=initialized audit_enabled=0 res=1
Jan 16 18:01:57.498220 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 18:01:57.498228 kernel: cpuidle: using governor menu
Jan 16 18:01:57.498237 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 18:01:57.498247 kernel: ASID allocator initialised with 32768 entries
Jan 16 18:01:57.498255 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 18:01:57.498263 kernel: Serial: AMBA PL011 UART driver
Jan 16 18:01:57.498271 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 18:01:57.498280 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 18:01:57.498288 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 18:01:57.498296 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 18:01:57.498306 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 18:01:57.498314 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 18:01:57.498322 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 18:01:57.498330 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 18:01:57.498338 kernel: ACPI: Added _OSI(Module Device)
Jan 16 18:01:57.498346 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 18:01:57.498355 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 18:01:57.498363 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 18:01:57.498373 kernel: ACPI: Interpreter enabled
Jan 16 18:01:57.498381 kernel: ACPI: Using GIC for interrupt routing
Jan 16 18:01:57.498389 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 18:01:57.498397 kernel: ACPI: CPU0 has been hot-added
Jan 16 18:01:57.498406 kernel: ACPI: CPU1 has been hot-added
Jan 16 18:01:57.498414 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 18:01:57.498422 kernel: printk: legacy console [ttyAMA0] enabled
Jan 16 18:01:57.498431 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 18:01:57.498620 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 18:01:57.498713 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 18:01:57.498800 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 18:01:57.498885 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 18:01:57.499092 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 18:01:57.499115 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 18:01:57.499123 kernel: PCI host bridge to bus 0000:00
Jan 16 18:01:57.499225 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 18:01:57.499345 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 18:01:57.499423 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 18:01:57.499495 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 18:01:57.499604 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 16 18:01:57.499697 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 16 18:01:57.499787 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 16 18:01:57.499869 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 18:01:57.500427 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.500544 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 16 18:01:57.500628 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 18:01:57.500708 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 18:01:57.500788 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 16 18:01:57.500878 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.501049 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 16 18:01:57.501143 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 18:01:57.501227 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 18:01:57.501322 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.501402 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 16 18:01:57.501481 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 18:01:57.501562 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 18:01:57.501640 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 16 18:01:57.501727 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.501807 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 16 18:01:57.501887 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 18:01:57.502038 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 18:01:57.502130 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 16 18:01:57.502222 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.502303 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 16 18:01:57.502382 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 18:01:57.502460 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 18:01:57.502538 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 16 18:01:57.502635 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.502719 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 16 18:01:57.502798 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 18:01:57.502876 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Jan 16 18:01:57.502995 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 16 18:01:57.505118 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.505327 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 16 18:01:57.505434 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 18:01:57.505536 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Jan 16 18:01:57.505624 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 16 18:01:57.505728 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.505808 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 16 18:01:57.506328 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 18:01:57.506932 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Jan 16 18:01:57.507072 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 18:01:57.507159 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 16 18:01:57.507241 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 18:01:57.507332 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 18:01:57.507425 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 16 18:01:57.507508 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 16 18:01:57.507604 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 16 18:01:57.507688 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 16 18:01:57.507773 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 18:01:57.507854 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 16 18:01:57.508804 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 16 18:01:57.509409 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 18:01:57.509519 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 16 18:01:57.509604 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 16 18:01:57.509694 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 18:01:57.509786 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 16 18:01:57.509868 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 18:01:57.510033 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 16 18:01:57.510122 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 16 18:01:57.510207 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 18:01:57.510302 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 16 18:01:57.510384 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 16 18:01:57.510464 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 18:01:57.510556 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 16 18:01:57.510639 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 16 18:01:57.510748 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 18:01:57.512082 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 16 18:01:57.512185 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 18:01:57.512271 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 18:01:57.512351 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 18:01:57.512437 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 18:01:57.512531 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 18:01:57.512610 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 18:01:57.512695 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 18:01:57.512777 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 18:01:57.512857 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 18:01:57.512996 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 18:01:57.513083 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 18:01:57.513165 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 18:01:57.513252 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 18:01:57.513335 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 18:01:57.513417 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 18:01:57.513510 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 18:01:57.513600 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 18:01:57.513686 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 18:01:57.513770 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 18:01:57.513852 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 18:01:57.517003 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 18:01:57.517185 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 18:01:57.517274 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 18:01:57.517355 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 18:01:57.517441 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 18:01:57.517521 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 18:01:57.517604 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 18:01:57.517688 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 16 18:01:57.517768 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 16 18:01:57.517851 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 16 18:01:57.518052 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 16 18:01:57.518151 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 16 18:01:57.518233 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 16 18:01:57.518323 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 16 18:01:57.518403 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 16 18:01:57.518487 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 16 18:01:57.518567 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 16 18:01:57.518650 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 16 18:01:57.518731 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 16 18:01:57.518817 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 16 18:01:57.518912 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 16 18:01:57.522012 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 16 18:01:57.522160 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 16 18:01:57.522246 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 16 18:01:57.522330 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 16 18:01:57.522429 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 16 18:01:57.522513 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 16 18:01:57.522597 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 16 18:01:57.522684 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 16 18:01:57.522773 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 16 18:01:57.522858 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 16 18:01:57.523021 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 16 18:01:57.523107 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 16 18:01:57.523192 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 16 18:01:57.523275 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 16 18:01:57.523362 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 16 18:01:57.523448 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 16 18:01:57.523538 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 16 18:01:57.523617 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 16 18:01:57.523701 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 16 18:01:57.523789 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 16 18:01:57.523870 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 16 18:01:57.524796 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 16 18:01:57.525018 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 16 18:01:57.527139 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 16 18:01:57.527260 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 16 18:01:57.527355 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 16 18:01:57.527440 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 16 18:01:57.527532 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 16 18:01:57.527617 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 18:01:57.527701 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 18:01:57.527781 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 18:01:57.527860 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 18:01:57.528030 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 16 18:01:57.528157 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 18:01:57.528276 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 18:01:57.531402 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 18:01:57.531615 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 18:01:57.531775 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 16 18:01:57.532042 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 16 18:01:57.532202 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 18:01:57.532284 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 18:01:57.532393 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 18:01:57.532476 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 18:01:57.532566 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 16 18:01:57.532654 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 18:01:57.532736 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 18:01:57.532818 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 18:01:57.532909 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 18:01:57.533060 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 16 18:01:57.533148 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 16 18:01:57.533240 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 18:01:57.533322 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 18:01:57.533402 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 18:01:57.533482 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 18:01:57.533571 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 16 18:01:57.533653 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 16 18:01:57.533737 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 18:01:57.533832 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 18:01:57.533930 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 18:01:57.534042 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 18:01:57.534134 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 16 18:01:57.534219 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 16 18:01:57.534304 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 16 18:01:57.534422 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 18:01:57.534513 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 18:01:57.534600 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 18:01:57.534683 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 18:01:57.534772 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 18:01:57.534856 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 18:01:57.535005 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 18:01:57.535097 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 18:01:57.535191 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 18:01:57.535275 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16
18:01:57.535358 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 16 18:01:57.535441 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 16 18:01:57.535529 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 16 18:01:57.535605 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 16 18:01:57.535682 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 16 18:01:57.535775 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 16 18:01:57.535858 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 16 18:01:57.535966 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 16 18:01:57.538158 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 16 18:01:57.538278 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 16 18:01:57.538360 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 16 18:01:57.538451 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 16 18:01:57.538548 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 16 18:01:57.538629 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 16 18:01:57.538724 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 16 18:01:57.538809 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 16 18:01:57.538888 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 16 18:01:57.542137 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 16 18:01:57.542261 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 16 18:01:57.542345 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 16 18:01:57.542446 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 16 18:01:57.542527 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 16 18:01:57.542607 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 16 18:01:57.542695 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 16 18:01:57.542777 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 16 18:01:57.542856 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 16 18:01:57.543015 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 16 18:01:57.543104 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 16 18:01:57.543184 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 16 18:01:57.543275 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 16 18:01:57.543354 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 16 18:01:57.543437 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 16 18:01:57.543448 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 16 18:01:57.543457 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 16 18:01:57.543466 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 16 18:01:57.543474 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 16 18:01:57.543482 kernel: iommu: Default domain type: Translated Jan 16 18:01:57.543491 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 16 18:01:57.543501 kernel: efivars: Registered efivars operations Jan 16 18:01:57.543509 kernel: vgaarb: loaded Jan 16 18:01:57.543517 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 16 18:01:57.543526 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 18:01:57.543535 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 18:01:57.543544 kernel: pnp: PnP ACPI init Jan 16 18:01:57.543656 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 16 18:01:57.543671 kernel: pnp: PnP ACPI: found 1 devices Jan 16 18:01:57.543680 kernel: NET: Registered PF_INET 
protocol family Jan 16 18:01:57.543689 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 16 18:01:57.543698 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 16 18:01:57.543706 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 18:01:57.543715 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 18:01:57.543723 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 16 18:01:57.543733 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 16 18:01:57.543804 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 18:01:57.543813 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 18:01:57.543822 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 18:01:57.543982 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 16 18:01:57.543998 kernel: PCI: CLS 0 bytes, default 64 Jan 16 18:01:57.544008 kernel: kvm [1]: HYP mode not available Jan 16 18:01:57.544021 kernel: Initialise system trusted keyrings Jan 16 18:01:57.544030 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 16 18:01:57.544039 kernel: Key type asymmetric registered Jan 16 18:01:57.544047 kernel: Asymmetric key parser 'x509' registered Jan 16 18:01:57.544055 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 16 18:01:57.544064 kernel: io scheduler mq-deadline registered Jan 16 18:01:57.544072 kernel: io scheduler kyber registered Jan 16 18:01:57.544082 kernel: io scheduler bfq registered Jan 16 18:01:57.544091 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 16 18:01:57.544192 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 16 18:01:57.544281 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 16 18:01:57.544368 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.544468 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 16 18:01:57.544556 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 16 18:01:57.544643 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.544734 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 16 18:01:57.544827 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 16 18:01:57.544927 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.545140 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 16 18:01:57.545235 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 16 18:01:57.545325 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.545418 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 16 18:01:57.545504 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 16 18:01:57.545587 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.545674 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 16 18:01:57.545759 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 16 18:01:57.545847 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.546094 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 16 18:01:57.546197 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 16 18:01:57.546282 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.546371 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 16 18:01:57.546456 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 16 18:01:57.546540 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.546558 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 16 18:01:57.546648 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 16 18:01:57.546735 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 16 18:01:57.546819 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 18:01:57.546830 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 16 18:01:57.546839 kernel: ACPI: button: Power Button [PWRB] Jan 16 18:01:57.546849 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 16 18:01:57.547019 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 16 18:01:57.547120 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 16 18:01:57.547132 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 18:01:57.547141 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 16 18:01:57.547227 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 16 18:01:57.547239 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 16 18:01:57.547253 kernel: thunder_xcv, ver 1.0 Jan 16 18:01:57.547261 kernel: thunder_bgx, ver 1.0 Jan 16 18:01:57.547270 kernel: nicpf, ver 1.0 Jan 16 18:01:57.547278 kernel: nicvf, ver 1.0 Jan 16 18:01:57.547387 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 16 18:01:57.547470 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T18:01:56 UTC (1768586516) Jan 16 18:01:57.547482 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 16 18:01:57.547493 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 16 18:01:57.547501 kernel: watchdog: NMI not fully supported Jan 16 18:01:57.547510 kernel: watchdog: Hard watchdog permanently disabled Jan 16 18:01:57.547519 kernel: NET: Registered PF_INET6 protocol family Jan 16 18:01:57.547527 kernel: Segment Routing with IPv6 Jan 16 18:01:57.547535 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 18:01:57.547544 kernel: NET: Registered PF_PACKET protocol family Jan 16 18:01:57.547554 kernel: Key type dns_resolver registered Jan 16 18:01:57.547563 kernel: registered taskstats version 1 Jan 16 18:01:57.547571 kernel: Loading compiled-in X.509 certificates Jan 16 18:01:57.547579 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 27e3aa638f3535434dc9dbdde4239fca944d5458' Jan 16 18:01:57.547588 kernel: Demotion targets for Node 0: null Jan 16 18:01:57.547596 kernel: Key type .fscrypt registered Jan 16 18:01:57.547604 kernel: Key type fscrypt-provisioning registered Jan 16 18:01:57.547901 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 16 18:01:57.547913 kernel: ima: Allocated hash algorithm: sha1 Jan 16 18:01:57.547922 kernel: ima: No architecture policies found Jan 16 18:01:57.547930 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 16 18:01:57.547938 kernel: clk: Disabling unused clocks Jan 16 18:01:57.547969 kernel: PM: genpd: Disabling unused power domains Jan 16 18:01:57.547978 kernel: Freeing unused kernel memory: 12480K Jan 16 18:01:57.547989 kernel: Run /init as init process Jan 16 18:01:57.547999 kernel: with arguments: Jan 16 18:01:57.548007 kernel: /init Jan 16 18:01:57.548017 kernel: with environment: Jan 16 18:01:57.548025 kernel: HOME=/ Jan 16 18:01:57.548033 kernel: TERM=linux Jan 16 18:01:57.548041 kernel: ACPI: bus type USB registered Jan 16 18:01:57.548050 kernel: usbcore: registered new interface driver usbfs Jan 16 18:01:57.548059 kernel: usbcore: registered new interface driver hub Jan 16 18:01:57.548068 kernel: usbcore: registered new device driver usb Jan 16 18:01:57.548205 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 18:01:57.548296 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 16 18:01:57.548383 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 16 18:01:57.548470 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 18:01:57.548558 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 16 18:01:57.548644 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 16 18:01:57.548766 kernel: hub 1-0:1.0: USB hub found Jan 16 18:01:57.548860 kernel: hub 1-0:1.0: 4 ports detected Jan 16 18:01:57.549035 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 16 18:01:57.549152 kernel: hub 2-0:1.0: USB hub found Jan 16 18:01:57.549251 kernel: hub 2-0:1.0: 4 ports detected Jan 16 18:01:57.549263 kernel: SCSI subsystem initialized Jan 16 18:01:57.549362 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 16 18:01:57.549967 kernel: scsi host0: Virtio SCSI HBA Jan 16 18:01:57.550097 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 16 18:01:57.550213 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 16 18:01:57.550306 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 16 18:01:57.550399 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 16 18:01:57.550493 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 16 18:01:57.550584 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 16 18:01:57.550679 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 16 18:01:57.550690 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 18:01:57.550699 kernel: GPT:25804799 != 80003071 Jan 16 18:01:57.550707 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 18:01:57.550716 kernel: GPT:25804799 != 80003071 Jan 16 18:01:57.550724 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 18:01:57.550732 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 18:01:57.550825 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 16 18:01:57.550932 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 16 18:01:57.551053 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 16 18:01:57.551065 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 16 18:01:57.551156 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 16 18:01:57.551167 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 16 18:01:57.551180 kernel: device-mapper: uevent: version 1.0.3 Jan 16 18:01:57.551188 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 16 18:01:57.551197 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 16 18:01:57.551205 kernel: raid6: neonx8 gen() 15681 MB/s Jan 16 18:01:57.551214 kernel: raid6: neonx4 gen() 14334 MB/s Jan 16 18:01:57.551222 kernel: raid6: neonx2 gen() 13141 MB/s Jan 16 18:01:57.551230 kernel: raid6: neonx1 gen() 10400 MB/s Jan 16 18:01:57.551240 kernel: raid6: int64x8 gen() 6796 MB/s Jan 16 18:01:57.551248 kernel: raid6: int64x4 gen() 7306 MB/s Jan 16 18:01:57.551368 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 16 18:01:57.551382 kernel: raid6: int64x2 gen() 6067 MB/s Jan 16 18:01:57.551390 kernel: raid6: int64x1 gen() 5024 MB/s Jan 16 18:01:57.551399 kernel: raid6: using algorithm neonx8 gen() 15681 MB/s Jan 16 18:01:57.551407 kernel: raid6: .... xor() 11952 MB/s, rmw enabled Jan 16 18:01:57.551418 kernel: raid6: using neon recovery algorithm Jan 16 18:01:57.551426 kernel: xor: measuring software checksum speed Jan 16 18:01:57.551434 kernel: 8regs : 21630 MB/sec Jan 16 18:01:57.551443 kernel: 32regs : 21681 MB/sec Jan 16 18:01:57.551451 kernel: arm64_neon : 28167 MB/sec Jan 16 18:01:57.551460 kernel: xor: using function: arm64_neon (28167 MB/sec) Jan 16 18:01:57.551468 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 18:01:57.551479 kernel: BTRFS: device fsid 772c9e2d-7e98-4acf-842c-b5416fff0f38 devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (213) Jan 16 18:01:57.551488 kernel: BTRFS info (device dm-0): first mount of filesystem 772c9e2d-7e98-4acf-842c-b5416fff0f38 Jan 16 18:01:57.551496 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:01:57.551505 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 18:01:57.551513 kernel: BTRFS info (device dm-0): disabling log replay at 
mount time Jan 16 18:01:57.551521 kernel: BTRFS info (device dm-0): enabling free space tree Jan 16 18:01:57.551530 kernel: loop: module loaded Jan 16 18:01:57.551539 kernel: loop0: detected capacity change from 0 to 91832 Jan 16 18:01:57.551547 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 18:01:57.551651 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 16 18:01:57.551664 systemd[1]: Successfully made /usr/ read-only. Jan 16 18:01:57.551676 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 18:01:57.551687 systemd[1]: Detected virtualization kvm. Jan 16 18:01:57.551696 systemd[1]: Detected architecture arm64. Jan 16 18:01:57.551704 systemd[1]: Running in initrd. Jan 16 18:01:57.551713 systemd[1]: No hostname configured, using default hostname. Jan 16 18:01:57.551722 systemd[1]: Hostname set to . Jan 16 18:01:57.551731 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 18:01:57.551739 systemd[1]: Queued start job for default target initrd.target. Jan 16 18:01:57.551750 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 18:01:57.551760 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:01:57.551768 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:01:57.551778 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 18:01:57.551788 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 16 18:01:57.551798 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 18:01:57.551809 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 18:01:57.551818 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:01:57.551826 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 18:01:57.551836 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 16 18:01:57.551845 systemd[1]: Reached target paths.target - Path Units. Jan 16 18:01:57.551854 systemd[1]: Reached target slices.target - Slice Units. Jan 16 18:01:57.551864 systemd[1]: Reached target swap.target - Swaps. Jan 16 18:01:57.551873 systemd[1]: Reached target timers.target - Timer Units. Jan 16 18:01:57.551882 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 18:01:57.551928 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 18:01:57.551938 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:01:57.551984 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 18:01:57.551994 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 16 18:01:57.552007 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:01:57.552016 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 18:01:57.552025 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:01:57.552034 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 18:01:57.552043 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 18:01:57.552052 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 16 18:01:57.552062 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 18:01:57.552072 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 18:01:57.552081 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 16 18:01:57.552091 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 18:01:57.552099 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 18:01:57.552108 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 18:01:57.552119 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:01:57.552128 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 18:01:57.552137 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:01:57.552146 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 18:01:57.552155 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 18:01:57.552199 systemd-journald[350]: Collecting audit messages is enabled. Jan 16 18:01:57.552221 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 18:01:57.552230 kernel: Bridge firewalling registered Jan 16 18:01:57.552241 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 18:01:57.552250 kernel: audit: type=1130 audit(1768586517.515:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.552259 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... 
Jan 16 18:01:57.552268 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:01:57.552278 kernel: audit: type=1130 audit(1768586517.527:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.552287 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 18:01:57.552297 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:01:57.552307 systemd-journald[350]: Journal started Jan 16 18:01:57.552327 systemd-journald[350]: Runtime Journal (/run/log/journal/16ed2ceb9360435aa4cd078c36f9c8a1) is 8M, max 76.5M, 68.5M free. Jan 16 18:01:57.515000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.527000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.512406 systemd-modules-load[351]: Inserted module 'br_netfilter' Jan 16 18:01:57.555357 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 18:01:57.555383 kernel: audit: type=1130 audit(1768586517.552:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:01:57.556000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.560975 kernel: audit: type=1130 audit(1768586517.556:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.561012 kernel: audit: type=1130 audit(1768586517.559:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.559000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.560116 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:01:57.562859 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:01:57.565942 kernel: audit: type=1130 audit(1768586517.562:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.562000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.570047 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 18:01:57.571000 audit: BPF prog-id=6 op=LOAD Jan 16 18:01:57.572642 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 16 18:01:57.574663 kernel: audit: type=1334 audit(1768586517.571:8): prog-id=6 op=LOAD Jan 16 18:01:57.578137 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 18:01:57.602063 systemd-tmpfiles[376]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 16 18:01:57.607317 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:01:57.608000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.610994 kernel: audit: type=1130 audit(1768586517.608:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.613118 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 18:01:57.612000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.617012 kernel: audit: type=1130 audit(1768586517.612:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.617332 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 16 18:01:57.638164 systemd-resolved[375]: Positive Trust Anchors: Jan 16 18:01:57.638820 systemd-resolved[375]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 18:01:57.638824 systemd-resolved[375]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 18:01:57.647767 dracut-cmdline[390]: dracut-109 Jan 16 18:01:57.647767 dracut-cmdline[390]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13 Jan 16 18:01:57.638858 systemd-resolved[375]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 18:01:57.678522 systemd-resolved[375]: Defaulting to hostname 'linux'. Jan 16 18:01:57.680374 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 18:01:57.681262 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:01:57.681000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.740017 kernel: Loading iSCSI transport class v2.0-870. 
Jan 16 18:01:57.752999 kernel: iscsi: registered transport (tcp) Jan 16 18:01:57.768996 kernel: iscsi: registered transport (qla4xxx) Jan 16 18:01:57.769083 kernel: QLogic iSCSI HBA Driver Jan 16 18:01:57.796489 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 18:01:57.838703 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:01:57.840000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.843444 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 18:01:57.897037 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 18:01:57.898000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.901941 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 18:01:57.904221 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 18:01:57.945041 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 18:01:57.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.946000 audit: BPF prog-id=7 op=LOAD Jan 16 18:01:57.946000 audit: BPF prog-id=8 op=LOAD Jan 16 18:01:57.946868 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:01:57.982771 systemd-udevd[619]: Using default interface naming scheme 'v257'. 
Jan 16 18:01:57.992999 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:01:57.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:57.997490 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 18:01:58.032184 dracut-pre-trigger[677]: rd.md=0: removing MD RAID activation Jan 16 18:01:58.053243 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 18:01:58.054000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.054000 audit: BPF prog-id=9 op=LOAD Jan 16 18:01:58.056733 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 18:01:58.080013 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 18:01:58.079000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.082607 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 18:01:58.112405 systemd-networkd[747]: lo: Link UP Jan 16 18:01:58.112000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.112413 systemd-networkd[747]: lo: Gained carrier Jan 16 18:01:58.113107 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 18:01:58.113769 systemd[1]: Reached target network.target - Network. 
Jan 16 18:01:58.159501 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:01:58.161000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.165424 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 18:01:58.333026 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 16 18:01:58.343934 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 16 18:01:58.363912 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 18:01:58.371979 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 16 18:01:58.374986 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 16 18:01:58.380453 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:01:58.381279 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:01:58.384291 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 16 18:01:58.382000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.383107 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:01:58.387435 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 16 18:01:58.399139 systemd-networkd[747]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:01:58.399154 systemd-networkd[747]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:01:58.400772 systemd-networkd[747]: eth0: Link UP Jan 16 18:01:58.401760 systemd-networkd[747]: eth0: Gained carrier Jan 16 18:01:58.401778 systemd-networkd[747]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:01:58.413627 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 16 18:01:58.413721 systemd-networkd[747]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:01:58.413725 systemd-networkd[747]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:01:58.415220 systemd-networkd[747]: eth1: Link UP Jan 16 18:01:58.415396 systemd-networkd[747]: eth1: Gained carrier Jan 16 18:01:58.415411 systemd-networkd[747]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:01:58.427140 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 18:01:58.434971 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 16 18:01:58.435079 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:01:58.434000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.438015 kernel: usbcore: registered new interface driver usbhid Jan 16 18:01:58.438057 kernel: usbhid: USB HID core driver Jan 16 18:01:58.448028 systemd-networkd[747]: eth0: DHCPv4 address 188.245.199.112/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 18:01:58.457999 disk-uuid[811]: Primary Header is updated. Jan 16 18:01:58.457999 disk-uuid[811]: Secondary Entries is updated. Jan 16 18:01:58.457999 disk-uuid[811]: Secondary Header is updated. Jan 16 18:01:58.459781 systemd-networkd[747]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 18:01:58.473519 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 18:01:58.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:58.483563 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 18:01:58.487242 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:01:58.488432 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 18:01:58.494709 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 18:01:58.533225 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 18:01:58.534000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.506533 disk-uuid[813]: Warning: The kernel is still using the old partition table. Jan 16 18:01:59.506533 disk-uuid[813]: The new table will be used at the next reboot or after you Jan 16 18:01:59.506533 disk-uuid[813]: run partprobe(8) or kpartx(8) Jan 16 18:01:59.506533 disk-uuid[813]: The operation has completed successfully.
Jan 16 18:01:59.517241 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 16 18:01:59.518226 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 18:01:59.519000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.519000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.521566 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 18:01:59.569003 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (839) Jan 16 18:01:59.571661 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:01:59.572335 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:01:59.576007 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 18:01:59.576102 kernel: BTRFS info (device sda6): turning on async discard Jan 16 18:01:59.576118 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 18:01:59.583974 kernel: BTRFS info (device sda6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:01:59.585465 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 18:01:59.586000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.588090 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 16 18:01:59.729610 ignition[858]: Ignition 2.24.0 Jan 16 18:01:59.729621 ignition[858]: Stage: fetch-offline Jan 16 18:01:59.729681 ignition[858]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:01:59.729691 ignition[858]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:01:59.730830 ignition[858]: parsed url from cmdline: "" Jan 16 18:01:59.730841 ignition[858]: no config URL provided Jan 16 18:01:59.730856 ignition[858]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 18:01:59.733865 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 18:01:59.730883 ignition[858]: no config at "/usr/lib/ignition/user.ign" Jan 16 18:01:59.730936 ignition[858]: failed to fetch config: resource requires networking Jan 16 18:01:59.735000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.731594 ignition[858]: Ignition finished successfully Jan 16 18:01:59.737152 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 16 18:01:59.766328 ignition[865]: Ignition 2.24.0 Jan 16 18:01:59.766348 ignition[865]: Stage: fetch Jan 16 18:01:59.766522 ignition[865]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:01:59.766530 ignition[865]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:01:59.766617 ignition[865]: parsed url from cmdline: "" Jan 16 18:01:59.766621 ignition[865]: no config URL provided Jan 16 18:01:59.766625 ignition[865]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 18:01:59.766631 ignition[865]: no config at "/usr/lib/ignition/user.ign" Jan 16 18:01:59.766662 ignition[865]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 16 18:01:59.770783 ignition[865]: GET result: OK Jan 16 18:01:59.771215 ignition[865]: parsing config with SHA512: 224be7a67409b093e1c15dde59514e93ecd897e24edd6367df893fddae7f5e89b720ef6520760301249222767381606eefb905291a3c214514a382c888b48816 Jan 16 18:01:59.778251 unknown[865]: fetched base config from "system" Jan 16 18:01:59.778262 unknown[865]: fetched base config from "system" Jan 16 18:01:59.778268 unknown[865]: fetched user config from "hetzner" Jan 16 18:01:59.779853 ignition[865]: fetch: fetch complete Jan 16 18:01:59.779861 ignition[865]: fetch: fetch passed Jan 16 18:01:59.780024 ignition[865]: Ignition finished successfully Jan 16 18:01:59.783327 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 18:01:59.784000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.789117 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 16 18:01:59.827972 ignition[872]: Ignition 2.24.0 Jan 16 18:01:59.827988 ignition[872]: Stage: kargs Jan 16 18:01:59.828164 ignition[872]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:01:59.828173 ignition[872]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:01:59.829050 ignition[872]: kargs: kargs passed Jan 16 18:01:59.829109 ignition[872]: Ignition finished successfully Jan 16 18:01:59.832055 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 18:01:59.833000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.835416 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 16 18:01:59.864751 ignition[878]: Ignition 2.24.0 Jan 16 18:01:59.864769 ignition[878]: Stage: disks Jan 16 18:01:59.865009 ignition[878]: no configs at "/usr/lib/ignition/base.d" Jan 16 18:01:59.865020 ignition[878]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:01:59.867985 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 18:01:59.868000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.866006 ignition[878]: disks: disks passed Jan 16 18:01:59.869337 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 18:01:59.866065 ignition[878]: Ignition finished successfully Jan 16 18:01:59.870740 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 18:01:59.872031 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 18:01:59.873198 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 18:01:59.874021 systemd[1]: Reached target basic.target - Basic System. 
Jan 16 18:01:59.876690 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 18:01:59.937156 systemd-fsck[886]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 16 18:01:59.943742 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 18:01:59.945000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:59.947340 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 18:01:59.957129 systemd-networkd[747]: eth1: Gained IPv6LL Jan 16 18:02:00.032972 kernel: EXT4-fs (sda9): mounted filesystem 3360ad79-d1e3-4f32-ae7d-4a8c0a3c719d r/w with ordered data mode. Quota mode: none. Jan 16 18:02:00.034292 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 18:02:00.036670 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 18:02:00.040757 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 18:02:00.042857 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 18:02:00.046722 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 18:02:00.047483 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 18:02:00.047519 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 18:02:00.067324 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 18:02:00.072417 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 16 18:02:00.087020 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (894) Jan 16 18:02:00.091117 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:02:00.091171 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:02:00.101253 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 18:02:00.103130 kernel: BTRFS info (device sda6): turning on async discard Jan 16 18:02:00.103179 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 18:02:00.104582 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 18:02:00.128294 coreos-metadata[896]: Jan 16 18:02:00.128 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 16 18:02:00.128294 coreos-metadata[896]: Jan 16 18:02:00.128 INFO Fetch successful Jan 16 18:02:00.131612 coreos-metadata[896]: Jan 16 18:02:00.130 INFO wrote hostname ci-4580-0-0-p-f44e0c3b96 to /sysroot/etc/hostname Jan 16 18:02:00.135309 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 18:02:00.138000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.254160 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 18:02:00.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.257423 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 18:02:00.260412 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... 
Jan 16 18:02:00.282977 kernel: BTRFS info (device sda6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:02:00.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.310111 kernel: kauditd_printk_skb: 26 callbacks suppressed Jan 16 18:02:00.310142 kernel: audit: type=1130 audit(1768586520.308:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.309038 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 18:02:00.317877 ignition[997]: INFO : Ignition 2.24.0 Jan 16 18:02:00.317877 ignition[997]: INFO : Stage: mount Jan 16 18:02:00.321860 ignition[997]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:02:00.321860 ignition[997]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:02:00.321860 ignition[997]: INFO : mount: mount passed Jan 16 18:02:00.321860 ignition[997]: INFO : Ignition finished successfully Jan 16 18:02:00.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.323294 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 18:02:00.329982 kernel: audit: type=1130 audit(1768586520.325:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:00.330091 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 18:02:00.469392 systemd-networkd[747]: eth0: Gained IPv6LL Jan 16 18:02:00.555294 systemd[1]: sysroot-oem.mount: Deactivated successfully. 
Jan 16 18:02:00.558820 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 18:02:00.589445 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1008) Jan 16 18:02:00.591289 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 18:02:00.591353 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 18:02:00.596536 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 18:02:00.596622 kernel: BTRFS info (device sda6): turning on async discard Jan 16 18:02:00.596635 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 18:02:00.598351 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 18:02:00.627697 ignition[1025]: INFO : Ignition 2.24.0 Jan 16 18:02:00.628468 ignition[1025]: INFO : Stage: files Jan 16 18:02:00.629051 ignition[1025]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:02:00.630479 ignition[1025]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:02:00.630479 ignition[1025]: DEBUG : files: compiled without relabeling support, skipping Jan 16 18:02:00.631983 ignition[1025]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 18:02:00.631983 ignition[1025]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 18:02:00.638031 ignition[1025]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 18:02:00.639518 ignition[1025]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 18:02:00.640836 unknown[1025]: wrote ssh authorized keys file for user: core Jan 16 18:02:00.642238 ignition[1025]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 18:02:00.645997 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" 
Jan 16 18:02:00.645997 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 16 18:02:00.760995 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 18:02:00.856657 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 18:02:00.867806 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 16 18:02:01.158846 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 18:02:01.766598 ignition[1025]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 16 18:02:01.766598 ignition[1025]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 18:02:01.771053 ignition[1025]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 18:02:01.774497 ignition[1025]: INFO : files: files passed Jan 16 18:02:01.774497 ignition[1025]: INFO : Ignition finished successfully Jan 16 18:02:01.789662 kernel: audit: type=1130 audit(1768586521.781:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.781000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.779081 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 18:02:01.785166 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 18:02:01.789238 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 18:02:01.803550 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 16 18:02:01.804523 systemd[1]: Finished ignition-quench.service - Ignition (record completion).
Jan 16 18:02:01.808881 kernel: audit: type=1130 audit(1768586521.805:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.808931 kernel: audit: type=1131 audit(1768586521.805:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.805000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.815022 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:02:01.816540 initrd-setup-root-after-ignition[1056]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:02:01.817586 initrd-setup-root-after-ignition[1060]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 18:02:01.820961 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 18:02:01.823000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.824394 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 16 18:02:01.827199 kernel: audit: type=1130 audit(1768586521.823:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.828500 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 18:02:01.889873 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 18:02:01.890721 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 18:02:01.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.892826 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 18:02:01.895406 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 18:02:01.898686 kernel: audit: type=1130 audit(1768586521.890:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.898722 kernel: audit: type=1131 audit(1768586521.890:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.898668 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 18:02:01.900405 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 16 18:02:01.932463 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 18:02:01.932000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.938003 kernel: audit: type=1130 audit(1768586521.932:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.938285 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 18:02:01.956266 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 18:02:01.956414 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:02:01.957209 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:02:01.959259 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 18:02:01.961000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.960181 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 18:02:01.960326 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 18:02:01.965501 kernel: audit: type=1131 audit(1768586521.961:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.961862 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 18:02:01.963880 systemd[1]: Stopped target basic.target - Basic System. Jan 16 18:02:01.964981 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. 
Jan 16 18:02:01.966391 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 18:02:01.967840 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 16 18:02:01.969381 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 16 18:02:01.970622 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 18:02:01.971620 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 18:02:01.972922 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 18:02:01.974024 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 18:02:01.975184 systemd[1]: Stopped target swap.target - Swaps. Jan 16 18:02:01.976000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.976103 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 18:02:01.976238 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 18:02:01.977482 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 18:02:01.978238 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:02:01.981000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.979361 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 18:02:01.979794 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:02:01.982000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:01.980624 systemd[1]: dracut-initqueue.service: Deactivated successfully. Jan 16 18:02:01.984000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.980756 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 18:02:01.986000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.982560 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 18:02:01.982717 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 18:02:01.983775 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 18:02:01.983873 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 18:02:01.985114 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 18:02:01.985219 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 18:02:01.987197 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 18:02:01.990334 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 18:02:01.993462 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 18:02:01.993658 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:02:01.996827 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 16 18:02:01.996979 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:02:01.995000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 16 18:02:01.997000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.998834 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 18:02:02.000000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:01.998982 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 18:02:02.006305 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 18:02:02.008160 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 18:02:02.009000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.023128 ignition[1080]: INFO : Ignition 2.24.0 Jan 16 18:02:02.025061 ignition[1080]: INFO : Stage: umount Jan 16 18:02:02.025061 ignition[1080]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 18:02:02.025061 ignition[1080]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 18:02:02.025061 ignition[1080]: INFO : umount: umount passed Jan 16 18:02:02.025061 ignition[1080]: INFO : Ignition finished successfully Jan 16 18:02:02.023827 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 18:02:02.028591 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 16 18:02:02.029513 systemd[1]: Stopped ignition-mount.service - Ignition (mount). 
Jan 16 18:02:02.030000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.031572 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 18:02:02.033052 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 18:02:02.033000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.034606 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 18:02:02.035414 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 18:02:02.035000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.036141 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 18:02:02.037000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.036201 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 18:02:02.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.037247 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 18:02:02.037298 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 16 18:02:02.041000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:02.038328 systemd[1]: Stopped target network.target - Network. Jan 16 18:02:02.039652 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 18:02:02.039718 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 18:02:02.041239 systemd[1]: Stopped target paths.target - Path Units. Jan 16 18:02:02.042108 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 18:02:02.048076 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:02:02.051043 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 18:02:02.053765 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 18:02:02.055531 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 18:02:02.055581 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 18:02:02.056527 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 18:02:02.056560 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 18:02:02.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.057566 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 16 18:02:02.059000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.057590 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:02:02.060000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.058558 systemd[1]: ignition-setup.service: Deactivated successfully. 
Jan 16 18:02:02.058621 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 18:02:02.059574 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 18:02:02.059618 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 18:02:02.060591 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 18:02:02.060638 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 18:02:02.061699 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 18:02:02.062788 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 18:02:02.072060 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 18:02:02.072000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.072227 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 18:02:02.076848 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 18:02:02.076000 audit: BPF prog-id=6 op=UNLOAD Jan 16 18:02:02.077176 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 18:02:02.077000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.078000 audit: BPF prog-id=9 op=UNLOAD Jan 16 18:02:02.080181 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 16 18:02:02.080800 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 18:02:02.080839 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:02:02.082880 systemd[1]: Stopping network-cleanup.service - Network Cleanup... 
Jan 16 18:02:02.085466 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 18:02:02.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.085555 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 18:02:02.089000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.088187 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 18:02:02.088256 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:02:02.091163 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 18:02:02.092000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.091227 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 18:02:02.092629 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:02:02.111820 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 18:02:02.112038 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:02:02.113000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.114597 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 18:02:02.114671 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. 
Jan 16 18:02:02.117688 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 18:02:02.119000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.117730 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:02:02.118576 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 18:02:02.118630 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 18:02:02.121413 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 18:02:02.121496 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 18:02:02.124000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.125408 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 18:02:02.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.125475 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 18:02:02.129018 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 18:02:02.129757 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 16 18:02:02.132000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.129842 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
Jan 16 18:02:02.133741 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 18:02:02.136000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.133828 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:02:02.139000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.137250 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 18:02:02.141000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.137307 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:02:02.138486 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 18:02:02.138532 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:02:02.139766 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:02:02.139823 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:02:02.141979 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 18:02:02.145102 systemd[1]: Stopped network-cleanup.service - Network Cleanup. 
Jan 16 18:02:02.146000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.154327 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 18:02:02.154466 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 18:02:02.155000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.155000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.156308 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 18:02:02.159437 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 18:02:02.184521 systemd[1]: Switching root. Jan 16 18:02:02.226582 systemd-journald[350]: Journal stopped Jan 16 18:02:03.351120 systemd-journald[350]: Received SIGTERM from PID 1 (systemd). 
Jan 16 18:02:03.351210 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 18:02:03.351233 kernel: SELinux: policy capability open_perms=1 Jan 16 18:02:03.351250 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 18:02:03.351263 kernel: SELinux: policy capability always_check_network=0 Jan 16 18:02:03.351273 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 18:02:03.351287 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 18:02:03.351297 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 18:02:03.351307 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 18:02:03.351319 kernel: SELinux: policy capability userspace_initial_context=0 Jan 16 18:02:03.351330 systemd[1]: Successfully loaded SELinux policy in 57.706ms. Jan 16 18:02:03.351351 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.856ms. Jan 16 18:02:03.351363 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 18:02:03.351375 systemd[1]: Detected virtualization kvm. Jan 16 18:02:03.351387 systemd[1]: Detected architecture arm64. Jan 16 18:02:03.351398 systemd[1]: Detected first boot. Jan 16 18:02:03.351410 systemd[1]: Hostname set to . Jan 16 18:02:03.351424 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 18:02:03.351436 zram_generator::config[1123]: No configuration found. Jan 16 18:02:03.351448 kernel: NET: Registered PF_VSOCK protocol family Jan 16 18:02:03.351459 systemd[1]: Populated /etc with preset unit settings. Jan 16 18:02:03.351471 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 18:02:03.351485 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 16 18:02:03.351496 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 18:02:03.351508 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 18:02:03.351520 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 18:02:03.351531 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 18:02:03.351543 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 18:02:03.351554 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 18:02:03.351567 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 18:02:03.351580 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 18:02:03.351592 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 18:02:03.351604 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:02:03.351615 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:02:03.351626 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 18:02:03.351637 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 18:02:03.351649 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 18:02:03.351660 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 18:02:03.351671 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 16 18:02:03.351682 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:02:03.351693 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 16 18:02:03.351706 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 18:02:03.351717 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 18:02:03.351728 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 18:02:03.351739 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 18:02:03.351750 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:02:03.351766 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 18:02:03.351779 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 16 18:02:03.351792 systemd[1]: Reached target slices.target - Slice Units. Jan 16 18:02:03.351804 systemd[1]: Reached target swap.target - Swaps. Jan 16 18:02:03.351816 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 18:02:03.351827 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 18:02:03.351838 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 16 18:02:03.351849 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:02:03.351860 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 16 18:02:03.351873 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:02:03.351897 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 16 18:02:03.351910 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 16 18:02:03.351921 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 18:02:03.351933 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:02:03.352053 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 16 18:02:03.352074 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 18:02:03.352089 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 18:02:03.352101 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 18:02:03.352112 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 18:02:03.352124 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 18:02:03.352135 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 18:02:03.352147 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 18:02:03.352158 systemd[1]: Reached target machines.target - Containers. Jan 16 18:02:03.352171 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 18:02:03.352183 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:02:03.352195 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 18:02:03.352206 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 18:02:03.352217 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:02:03.352229 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:02:03.352241 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:02:03.352253 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 18:02:03.352264 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:02:03.352275 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 16 18:02:03.352286 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 18:02:03.352299 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 18:02:03.352312 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 18:02:03.352323 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 18:02:03.352335 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:02:03.352347 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 18:02:03.352361 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 18:02:03.352373 kernel: fuse: init (API version 7.41) Jan 16 18:02:03.352384 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 18:02:03.352395 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 18:02:03.352407 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 16 18:02:03.352418 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 18:02:03.352430 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 18:02:03.352441 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 18:02:03.352454 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 18:02:03.352465 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 18:02:03.352477 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 18:02:03.352489 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 16 18:02:03.352502 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. 
Jan 16 18:02:03.352513 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 18:02:03.352524 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 18:02:03.352536 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:02:03.352546 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:02:03.352558 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:02:03.352570 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:02:03.352583 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 18:02:03.352594 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 18:02:03.352605 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:02:03.352616 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:02:03.352628 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 18:02:03.352639 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 18:02:03.352651 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 16 18:02:03.352663 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 18:02:03.352674 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 16 18:02:03.352686 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 18:02:03.352698 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 18:02:03.352709 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 16 18:02:03.352720 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. 
Jan 16 18:02:03.352732 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:02:03.352745 kernel: ACPI: bus type drm_connector registered Jan 16 18:02:03.352757 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 18:02:03.352769 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:02:03.352780 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 18:02:03.352792 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:02:03.352804 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 18:02:03.352816 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 18:02:03.352829 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 18:02:03.352840 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 18:02:03.352896 systemd-journald[1187]: Collecting audit messages is enabled. Jan 16 18:02:03.352924 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:02:03.352936 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:02:03.355400 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 16 18:02:03.355438 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 18:02:03.355450 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 18:02:03.355462 systemd[1]: Reached target network-pre.target - Preparation for Network. 
Jan 16 18:02:03.355477 systemd-journald[1187]: Journal started Jan 16 18:02:03.355507 systemd-journald[1187]: Runtime Journal (/run/log/journal/16ed2ceb9360435aa4cd078c36f9c8a1) is 8M, max 76.5M, 68.5M free. Jan 16 18:02:03.169000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.176000 audit: BPF prog-id=14 op=UNLOAD Jan 16 18:02:03.176000 audit: BPF prog-id=13 op=UNLOAD Jan 16 18:02:03.363248 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 18:02:03.178000 audit: BPF prog-id=15 op=LOAD Jan 16 18:02:03.178000 audit: BPF prog-id=16 op=LOAD Jan 16 18:02:03.178000 audit: BPF prog-id=17 op=LOAD Jan 16 18:02:03.242000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.245000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.245000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.248000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:03.248000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.252000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.252000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.255000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.255000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.258000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.262000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:03.264000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.328000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 18:02:03.328000 audit[1187]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffea4bb780 a2=4000 a3=0 items=0 ppid=1 pid=1187 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:03.328000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 16 18:02:03.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.335000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.337000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:03.360000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:02.975317 systemd[1]: Queued start job for default target multi-user.target. Jan 16 18:02:03.000566 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 16 18:02:03.001249 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 18:02:03.366208 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 18:02:03.369000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.370050 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:02:03.397128 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 16 18:02:03.398000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.398489 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 18:02:03.400983 kernel: loop1: detected capacity change from 0 to 8 Jan 16 18:02:03.404367 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 16 18:02:03.417226 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 16 18:02:03.417534 systemd-tmpfiles[1214]: ACLs are not supported, ignoring. Jan 16 18:02:03.425200 systemd-journald[1187]: Time spent on flushing to /var/log/journal/16ed2ceb9360435aa4cd078c36f9c8a1 is 46.748ms for 1300 entries. 
Jan 16 18:02:03.425200 systemd-journald[1187]: System Journal (/var/log/journal/16ed2ceb9360435aa4cd078c36f9c8a1) is 8M, max 588.1M, 580.1M free. Jan 16 18:02:03.482463 systemd-journald[1187]: Received client request to flush runtime journal. Jan 16 18:02:03.483162 kernel: loop2: detected capacity change from 0 to 45344 Jan 16 18:02:03.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.446000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.454000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.433223 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:02:03.445023 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 18:02:03.452341 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 18:02:03.454027 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:02:03.487213 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 18:02:03.487000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:03.496008 kernel: loop3: detected capacity change from 0 to 211168 Jan 16 18:02:03.502048 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 16 18:02:03.502000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.518000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.519151 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 18:02:03.520000 audit: BPF prog-id=18 op=LOAD Jan 16 18:02:03.520000 audit: BPF prog-id=19 op=LOAD Jan 16 18:02:03.520000 audit: BPF prog-id=20 op=LOAD Jan 16 18:02:03.524000 audit: BPF prog-id=21 op=LOAD Jan 16 18:02:03.524150 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 16 18:02:03.528633 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 18:02:03.530722 kernel: loop4: detected capacity change from 0 to 100192 Jan 16 18:02:03.533980 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 16 18:02:03.538000 audit: BPF prog-id=22 op=LOAD Jan 16 18:02:03.540000 audit: BPF prog-id=23 op=LOAD Jan 16 18:02:03.540000 audit: BPF prog-id=24 op=LOAD Jan 16 18:02:03.542259 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 16 18:02:03.544000 audit: BPF prog-id=25 op=LOAD Jan 16 18:02:03.545000 audit: BPF prog-id=26 op=LOAD Jan 16 18:02:03.545000 audit: BPF prog-id=27 op=LOAD Jan 16 18:02:03.547374 systemd[1]: Starting systemd-userdbd.service - User Database Manager... 
Jan 16 18:02:03.562970 kernel: loop5: detected capacity change from 0 to 8 Jan 16 18:02:03.568978 kernel: loop6: detected capacity change from 0 to 45344 Jan 16 18:02:03.583983 kernel: loop7: detected capacity change from 0 to 211168 Jan 16 18:02:03.602972 kernel: loop1: detected capacity change from 0 to 100192 Jan 16 18:02:03.602792 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 16 18:02:03.602807 systemd-tmpfiles[1269]: ACLs are not supported, ignoring. Jan 16 18:02:03.611651 (sd-merge)[1273]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 16 18:02:03.616000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.617146 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:02:03.619199 (sd-merge)[1273]: Merged extensions into '/usr'. Jan 16 18:02:03.627170 systemd[1]: Reload requested from client PID 1213 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 18:02:03.627190 systemd[1]: Reloading... Jan 16 18:02:03.645269 systemd-nsresourced[1270]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 16 18:02:03.747975 zram_generator::config[1316]: No configuration found. Jan 16 18:02:03.838750 systemd-oomd[1267]: No swap; memory pressure usage will be degraded Jan 16 18:02:03.842353 systemd-resolved[1268]: Positive Trust Anchors: Jan 16 18:02:03.842365 systemd-resolved[1268]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 18:02:03.842368 systemd-resolved[1268]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 18:02:03.842399 systemd-resolved[1268]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 18:02:03.850730 systemd-resolved[1268]: Using system hostname 'ci-4580-0-0-p-f44e0c3b96'. Jan 16 18:02:03.966607 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 18:02:03.967302 systemd[1]: Reloading finished in 339 ms. Jan 16 18:02:03.987540 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 16 18:02:03.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.991000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.991239 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 18:02:03.992376 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 16 18:02:03.992000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:03.993783 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 18:02:03.994000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:03.997180 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 18:02:03.998000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:04.001566 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:02:04.011198 systemd[1]: Starting ensure-sysext.service... Jan 16 18:02:04.013000 audit: BPF prog-id=28 op=LOAD Jan 16 18:02:04.013000 audit: BPF prog-id=21 op=UNLOAD Jan 16 18:02:04.015000 audit: BPF prog-id=29 op=LOAD Jan 16 18:02:04.015000 audit: BPF prog-id=25 op=UNLOAD Jan 16 18:02:04.015000 audit: BPF prog-id=30 op=LOAD Jan 16 18:02:04.015000 audit: BPF prog-id=31 op=LOAD Jan 16 18:02:04.015000 audit: BPF prog-id=26 op=UNLOAD Jan 16 18:02:04.015000 audit: BPF prog-id=27 op=UNLOAD Jan 16 18:02:04.016000 audit: BPF prog-id=32 op=LOAD Jan 16 18:02:04.016000 audit: BPF prog-id=18 op=UNLOAD Jan 16 18:02:04.016000 audit: BPF prog-id=33 op=LOAD Jan 16 18:02:04.016000 audit: BPF prog-id=34 op=LOAD Jan 16 18:02:04.016000 audit: BPF prog-id=19 op=UNLOAD Jan 16 18:02:04.016000 audit: BPF prog-id=20 op=UNLOAD Jan 16 18:02:04.013223 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 16 18:02:04.017000 audit: BPF prog-id=35 op=LOAD Jan 16 18:02:04.018000 audit: BPF prog-id=15 op=UNLOAD Jan 16 18:02:04.018000 audit: BPF prog-id=36 op=LOAD Jan 16 18:02:04.018000 audit: BPF prog-id=37 op=LOAD Jan 16 18:02:04.018000 audit: BPF prog-id=16 op=UNLOAD Jan 16 18:02:04.018000 audit: BPF prog-id=17 op=UNLOAD Jan 16 18:02:04.020000 audit: BPF prog-id=38 op=LOAD Jan 16 18:02:04.020000 audit: BPF prog-id=22 op=UNLOAD Jan 16 18:02:04.020000 audit: BPF prog-id=39 op=LOAD Jan 16 18:02:04.020000 audit: BPF prog-id=40 op=LOAD Jan 16 18:02:04.020000 audit: BPF prog-id=23 op=UNLOAD Jan 16 18:02:04.020000 audit: BPF prog-id=24 op=UNLOAD Jan 16 18:02:04.055198 systemd[1]: Reload requested from client PID 1353 ('systemctl') (unit ensure-sysext.service)... Jan 16 18:02:04.055221 systemd[1]: Reloading... Jan 16 18:02:04.059468 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 16 18:02:04.060509 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 16 18:02:04.060994 systemd-tmpfiles[1354]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 16 18:02:04.062236 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Jan 16 18:02:04.062402 systemd-tmpfiles[1354]: ACLs are not supported, ignoring. Jan 16 18:02:04.072119 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:02:04.072319 systemd-tmpfiles[1354]: Skipping /boot Jan 16 18:02:04.080403 systemd-tmpfiles[1354]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:02:04.080532 systemd-tmpfiles[1354]: Skipping /boot Jan 16 18:02:04.159975 zram_generator::config[1386]: No configuration found. Jan 16 18:02:04.332588 systemd[1]: Reloading finished in 277 ms. Jan 16 18:02:04.360021 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. 
Jan 16 18:02:04.359000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:04.361000 audit: BPF prog-id=41 op=LOAD Jan 16 18:02:04.362000 audit: BPF prog-id=32 op=UNLOAD Jan 16 18:02:04.362000 audit: BPF prog-id=42 op=LOAD Jan 16 18:02:04.362000 audit: BPF prog-id=43 op=LOAD Jan 16 18:02:04.362000 audit: BPF prog-id=33 op=UNLOAD Jan 16 18:02:04.362000 audit: BPF prog-id=34 op=UNLOAD Jan 16 18:02:04.363000 audit: BPF prog-id=44 op=LOAD Jan 16 18:02:04.363000 audit: BPF prog-id=29 op=UNLOAD Jan 16 18:02:04.363000 audit: BPF prog-id=45 op=LOAD Jan 16 18:02:04.363000 audit: BPF prog-id=46 op=LOAD Jan 16 18:02:04.363000 audit: BPF prog-id=30 op=UNLOAD Jan 16 18:02:04.363000 audit: BPF prog-id=31 op=UNLOAD Jan 16 18:02:04.364000 audit: BPF prog-id=47 op=LOAD Jan 16 18:02:04.365000 audit: BPF prog-id=35 op=UNLOAD Jan 16 18:02:04.365000 audit: BPF prog-id=48 op=LOAD Jan 16 18:02:04.365000 audit: BPF prog-id=49 op=LOAD Jan 16 18:02:04.365000 audit: BPF prog-id=36 op=UNLOAD Jan 16 18:02:04.365000 audit: BPF prog-id=37 op=UNLOAD Jan 16 18:02:04.366000 audit: BPF prog-id=50 op=LOAD Jan 16 18:02:04.366000 audit: BPF prog-id=28 op=UNLOAD Jan 16 18:02:04.366000 audit: BPF prog-id=51 op=LOAD Jan 16 18:02:04.366000 audit: BPF prog-id=38 op=UNLOAD Jan 16 18:02:04.366000 audit: BPF prog-id=52 op=LOAD Jan 16 18:02:04.366000 audit: BPF prog-id=53 op=LOAD Jan 16 18:02:04.366000 audit: BPF prog-id=39 op=UNLOAD Jan 16 18:02:04.366000 audit: BPF prog-id=40 op=UNLOAD Jan 16 18:02:04.369875 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:02:04.370000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:04.379587 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 18:02:04.385294 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 18:02:04.390181 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 18:02:04.396908 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 18:02:04.397000 audit: BPF prog-id=8 op=UNLOAD Jan 16 18:02:04.397000 audit: BPF prog-id=7 op=UNLOAD Jan 16 18:02:04.397000 audit: BPF prog-id=54 op=LOAD Jan 16 18:02:04.399000 audit: BPF prog-id=55 op=LOAD Jan 16 18:02:04.401323 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:02:04.406527 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 18:02:04.414848 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:02:04.418262 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:02:04.421860 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:02:04.433056 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:02:04.433758 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:02:04.434023 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:02:04.434194 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). 
Jan 16 18:02:04.438202 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:02:04.438395 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:02:04.438536 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:02:04.438615 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:02:04.443863 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:02:04.456198 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:02:04.456921 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:02:04.457175 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:02:04.457269 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:02:04.458000 audit[1430]: SYSTEM_BOOT pid=1430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 16 18:02:04.473286 systemd[1]: Finished ensure-sysext.service. 
Jan 16 18:02:04.474000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:04.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:04.475971 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 18:02:04.484000 audit: BPF prog-id=56 op=LOAD Jan 16 18:02:04.490695 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 18:02:04.513586 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 18:02:04.513906 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:02:04.514000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 16 18:02:04.514000 audit[1458]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffefcc6f00 a2=420 a3=0 items=0 ppid=1425 pid=1458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:04.514000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:02:04.515847 augenrules[1458]: No rules Jan 16 18:02:04.516866 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:02:04.520370 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:02:04.523401 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:02:04.525025 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
Jan 16 18:02:04.526462 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 18:02:04.529510 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:02:04.529714 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:02:04.531200 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:02:04.533236 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:02:04.535467 systemd-udevd[1429]: Using default interface naming scheme 'v257'. Jan 16 18:02:04.543166 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:02:04.543267 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:02:04.581134 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:02:04.585164 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 18:02:04.600779 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 18:02:04.604678 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 18:02:04.635871 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 18:02:04.636852 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 18:02:04.706494 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 16 18:02:04.724700 systemd-networkd[1472]: lo: Link UP Jan 16 18:02:04.725314 systemd-networkd[1472]: lo: Gained carrier Jan 16 18:02:04.728694 systemd[1]: Started systemd-networkd.service - Network Configuration. 
Jan 16 18:02:04.729744 systemd[1]: Reached target network.target - Network. Jan 16 18:02:04.733077 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 16 18:02:04.739297 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 18:02:04.772595 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 16 18:02:04.808108 systemd-networkd[1472]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:02:04.808120 systemd-networkd[1472]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:02:04.813556 systemd-networkd[1472]: eth0: Link UP Jan 16 18:02:04.814406 systemd-networkd[1472]: eth0: Gained carrier Jan 16 18:02:04.815518 systemd-networkd[1472]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:02:04.819590 systemd-networkd[1472]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:02:04.819598 systemd-networkd[1472]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 16 18:02:04.823257 systemd-networkd[1472]: eth1: Link UP Jan 16 18:02:04.825550 systemd-networkd[1472]: eth1: Gained carrier Jan 16 18:02:04.825582 systemd-networkd[1472]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:02:04.870218 systemd-networkd[1472]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 18:02:04.875055 systemd-networkd[1472]: eth0: DHCPv4 address 188.245.199.112/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 18:02:04.876052 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jan 16 18:02:04.877767 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jan 16 18:02:04.878642 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jan 16 18:02:04.942998 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 18:02:05.006859 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 16 18:02:05.007090 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:02:05.010150 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:02:05.012264 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:02:05.018677 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:02:05.019489 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:02:05.019588 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 16 18:02:05.019635 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:02:05.019661 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 16 18:02:05.050328 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:02:05.050593 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:02:05.055914 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:02:05.056259 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:02:05.058227 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:02:05.058775 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:02:05.062109 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:02:05.062194 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:02:05.069208 ldconfig[1427]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 18:02:05.076189 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 18:02:05.082642 systemd[1]: Starting systemd-update-done.service - Update is Completed... 
Jan 16 18:02:05.092009 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 18:02:05.092110 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 18:02:05.092125 kernel: [drm] features: -context_init Jan 16 18:02:05.093235 kernel: [drm] number of scanouts: 1 Jan 16 18:02:05.093302 kernel: [drm] number of cap sets: 0 Jan 16 18:02:05.094072 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 16 18:02:05.106980 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 18:02:05.126005 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 18:02:05.147072 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 18:02:05.148060 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 18:02:05.150408 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 18:02:05.151137 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 18:02:05.153254 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 18:02:05.154725 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 18:02:05.156644 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 16 18:02:05.157516 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 16 18:02:05.160115 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 18:02:05.161015 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 18:02:05.161150 systemd[1]: Reached target paths.target - Path Units. Jan 16 18:02:05.162230 systemd[1]: Reached target timers.target - Timer Units. 
Jan 16 18:02:05.164638 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 18:02:05.167648 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 16 18:02:05.175130 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 16 18:02:05.179099 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 16 18:02:05.182051 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 16 18:02:05.210270 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 18:02:05.211434 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 16 18:02:05.215618 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 18:02:05.225541 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 18:02:05.227417 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 18:02:05.229814 systemd[1]: Reached target basic.target - Basic System. Jan 16 18:02:05.230575 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:02:05.230595 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:02:05.236217 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 18:02:05.243319 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 18:02:05.249545 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 18:02:05.251311 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 16 18:02:05.254352 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 18:02:05.260621 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... 
Jan 16 18:02:05.261253 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 18:02:05.266297 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 18:02:05.271604 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 18:02:05.276647 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 18:02:05.287752 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 18:02:05.293178 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 18:02:05.303457 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 18:02:05.313877 jq[1549]: false Jan 16 18:02:05.314245 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 18:02:05.315994 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 18:02:05.317097 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 18:02:05.324252 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 18:02:05.332965 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 18:02:05.344127 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 18:02:05.345580 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 16 18:02:05.345857 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 18:02:05.373418 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... 
Jan 16 18:02:05.379007 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 18:02:05.379262 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 18:02:05.391147 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 18:02:05.400608 extend-filesystems[1550]: Found /dev/sda6 Jan 16 18:02:05.418000 jq[1566]: true Jan 16 18:02:05.422811 extend-filesystems[1550]: Found /dev/sda9 Jan 16 18:02:05.436417 tar[1570]: linux-arm64/LICENSE Jan 16 18:02:05.436417 tar[1570]: linux-arm64/helm Jan 16 18:02:05.436739 extend-filesystems[1550]: Checking size of /dev/sda9 Jan 16 18:02:05.440564 coreos-metadata[1546]: Jan 16 18:02:05.440 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 18:02:05.441588 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:02:05.441838 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:02:05.450288 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:02:05.455040 coreos-metadata[1546]: Jan 16 18:02:05.452 INFO Fetch successful Jan 16 18:02:05.455040 coreos-metadata[1546]: Jan 16 18:02:05.453 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 18:02:05.461715 coreos-metadata[1546]: Jan 16 18:02:05.459 INFO Fetch successful Jan 16 18:02:05.471049 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 18:02:05.475649 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. 
Jan 16 18:02:05.492677 jq[1594]: true Jan 16 18:02:05.501559 update_engine[1565]: I20260116 18:02:05.500638 1565 main.cc:92] Flatcar Update Engine starting Jan 16 18:02:05.504320 extend-filesystems[1550]: Resized partition /dev/sda9 Jan 16 18:02:05.511625 dbus-daemon[1547]: [system] SELinux support is enabled Jan 16 18:02:05.513185 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 18:02:05.513649 extend-filesystems[1608]: resize2fs 1.47.3 (8-Jul-2025) Jan 16 18:02:05.516930 update_engine[1565]: I20260116 18:02:05.516863 1565 update_check_scheduler.cc:74] Next update check in 9m47s Jan 16 18:02:05.518931 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 18:02:05.519235 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 16 18:02:05.520585 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 18:02:05.520601 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 18:02:05.525269 systemd[1]: Started update-engine.service - Update Engine. Jan 16 18:02:05.527968 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 16 18:02:05.541021 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 18:02:05.734135 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 16 18:02:05.734997 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 16 18:02:05.756766 extend-filesystems[1608]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 16 18:02:05.756766 extend-filesystems[1608]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 16 18:02:05.756766 extend-filesystems[1608]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 16 18:02:05.772966 extend-filesystems[1550]: Resized filesystem in /dev/sda9 Jan 16 18:02:05.760151 systemd-logind[1564]: New seat seat0. Jan 16 18:02:05.773782 bash[1639]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:02:05.761089 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 18:02:05.762338 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 18:02:05.763177 systemd-logind[1564]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 18:02:05.763194 systemd-logind[1564]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 18:02:05.766575 systemd[1]: Started systemd-logind.service - User Login Management. Jan 16 18:02:05.774035 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 18:02:05.775215 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 18:02:05.788564 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 18:02:05.791756 systemd[1]: Starting sshkeys.service... Jan 16 18:02:05.852341 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 18:02:05.857141 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... 
Jan 16 18:02:05.936256 coreos-metadata[1649]: Jan 16 18:02:05.936 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 16 18:02:05.939025 coreos-metadata[1649]: Jan 16 18:02:05.938 INFO Fetch successful Jan 16 18:02:05.941064 containerd[1593]: time="2026-01-16T18:02:05Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 16 18:02:05.942611 unknown[1649]: wrote ssh authorized keys file for user: core Jan 16 18:02:05.949797 containerd[1593]: time="2026-01-16T18:02:05.946187400Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 16 18:02:05.984113 update-ssh-keys[1654]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:02:05.987186 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 18:02:05.988025 containerd[1593]: time="2026-01-16T18:02:05.987807680Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="11.72µs" Jan 16 18:02:05.988025 containerd[1593]: time="2026-01-16T18:02:05.987845600Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 16 18:02:05.988025 containerd[1593]: time="2026-01-16T18:02:05.987938400Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 16 18:02:05.988025 containerd[1593]: time="2026-01-16T18:02:05.987969520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 16 18:02:05.990584 containerd[1593]: time="2026-01-16T18:02:05.988142120Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 16 18:02:05.990584 containerd[1593]: time="2026-01-16T18:02:05.988174840Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990584 containerd[1593]: time="2026-01-16T18:02:05.988235360Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990584 containerd[1593]: time="2026-01-16T18:02:05.988246520Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990759 containerd[1593]: time="2026-01-16T18:02:05.990652000Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990759 containerd[1593]: time="2026-01-16T18:02:05.990675920Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990759 containerd[1593]: time="2026-01-16T18:02:05.990690960Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:02:05.990759 containerd[1593]: time="2026-01-16T18:02:05.990698760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.990874440Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.990941320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.991041360Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.991214600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.992205000Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.992220200Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 16 18:02:05.992368 containerd[1593]: time="2026-01-16T18:02:05.992250800Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 16 18:02:05.992688 containerd[1593]: time="2026-01-16T18:02:05.992548000Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 16 18:02:05.992688 containerd[1593]: time="2026-01-16T18:02:05.992627160Z" level=info msg="metadata content store policy set" policy=shared Jan 16 18:02:05.993037 systemd[1]: Finished sshkeys.service. 
Jan 16 18:02:05.999000 containerd[1593]: time="2026-01-16T18:02:05.998737280Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 16 18:02:05.999000 containerd[1593]: time="2026-01-16T18:02:05.998806960Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:02:05.999990 containerd[1593]: time="2026-01-16T18:02:05.998941400Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:02:05.999990 containerd[1593]: time="2026-01-16T18:02:05.999987200Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 16 18:02:06.000087 containerd[1593]: time="2026-01-16T18:02:06.000004720Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 16 18:02:06.000087 containerd[1593]: time="2026-01-16T18:02:06.000047560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 16 18:02:06.000087 containerd[1593]: time="2026-01-16T18:02:06.000059760Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 16 18:02:06.000087 containerd[1593]: time="2026-01-16T18:02:06.000069240Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 16 18:02:06.000087 containerd[1593]: time="2026-01-16T18:02:06.000083480Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000104040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000116240Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000126760Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000136680Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000151840Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 16 18:02:06.000311 containerd[1593]: time="2026-01-16T18:02:06.000301360Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000323640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000339440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000350040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000360440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000380440Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000392800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 18:02:06.000406 containerd[1593]: time="2026-01-16T18:02:06.000405280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 16 18:02:06.000519 containerd[1593]: time="2026-01-16T18:02:06.000416800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 18:02:06.000519 containerd[1593]: time="2026-01-16T18:02:06.000428920Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 18:02:06.000519 containerd[1593]: time="2026-01-16T18:02:06.000438560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 18:02:06.000519 containerd[1593]: time="2026-01-16T18:02:06.000463880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 18:02:06.000519 containerd[1593]: time="2026-01-16T18:02:06.000508720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 18:02:06.001058 containerd[1593]: time="2026-01-16T18:02:06.000522800Z" level=info msg="Start snapshots syncer" Jan 16 18:02:06.001271 containerd[1593]: time="2026-01-16T18:02:06.001214880Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 18:02:06.003338 containerd[1593]: time="2026-01-16T18:02:06.003273600Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 16 18:02:06.003475 containerd[1593]: time="2026-01-16T18:02:06.003352400Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 18:02:06.003475 containerd[1593]: 
time="2026-01-16T18:02:06.003409920Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003538600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003569800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003580320Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003590040Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003602560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003613120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003623520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003634000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003647320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003681160Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:02:06.003727 containerd[1593]: 
time="2026-01-16T18:02:06.003695360Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003703640Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003712440Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:02:06.003727 containerd[1593]: time="2026-01-16T18:02:06.003719920Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003730000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003740720Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003818120Z" level=info msg="runtime interface created" Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003824280Z" level=info msg="created NRI interface" Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003833040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003844960Z" level=info msg="Connect containerd service" Jan 16 18:02:06.004310 containerd[1593]: time="2026-01-16T18:02:06.003865400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 18:02:06.008218 locksmithd[1611]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 18:02:06.011341 containerd[1593]: 
time="2026-01-16T18:02:06.011249800Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197023760Z" level=info msg="Start subscribing containerd event" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197086680Z" level=info msg="Start recovering state" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197198480Z" level=info msg="Start event monitor" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197212160Z" level=info msg="Start cni network conf syncer for default" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197219680Z" level=info msg="Start streaming server" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197229200Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197237040Z" level=info msg="runtime interface starting up..." Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197243000Z" level=info msg="starting plugins..." Jan 16 18:02:06.198004 containerd[1593]: time="2026-01-16T18:02:06.197257320Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 18:02:06.199204 containerd[1593]: time="2026-01-16T18:02:06.199171120Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 16 18:02:06.199267 containerd[1593]: time="2026-01-16T18:02:06.199248360Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 18:02:06.200097 containerd[1593]: time="2026-01-16T18:02:06.199317840Z" level=info msg="containerd successfully booted in 0.258752s" Jan 16 18:02:06.199488 systemd[1]: Started containerd.service - containerd container runtime. 
Jan 16 18:02:06.215763 tar[1570]: linux-arm64/README.md Jan 16 18:02:06.237062 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 18:02:06.293106 systemd-networkd[1472]: eth1: Gained IPv6LL Jan 16 18:02:06.294169 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jan 16 18:02:06.301553 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 18:02:06.304235 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 18:02:06.308432 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:06.312319 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 18:02:06.357818 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 16 18:02:06.359077 systemd-networkd[1472]: eth0: Gained IPv6LL Jan 16 18:02:06.359943 systemd-timesyncd[1454]: Network configuration changed, trying to establish connection. Jan 16 18:02:06.703460 sshd_keygen[1589]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 18:02:06.732058 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 18:02:06.737269 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 18:02:06.761423 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 18:02:06.761974 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 18:02:06.767242 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 18:02:06.792016 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 18:02:06.797438 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 18:02:06.802014 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 16 18:02:06.803074 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 18:02:07.130802 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:02:07.132643 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 18:02:07.134173 systemd[1]: Startup finished in 1.879s (kernel) + 5.197s (initrd) + 4.803s (userspace) = 11.880s. Jan 16 18:02:07.142872 (kubelet)[1709]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:07.684764 kubelet[1709]: E0116 18:02:07.684711 1709 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:07.688861 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:07.689071 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:07.691091 systemd[1]: kubelet.service: Consumed 903ms CPU time, 259.8M memory peak. Jan 16 18:02:11.399299 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 18:02:11.405381 systemd[1]: Started sshd@0-188.245.199.112:22-154.41.135.50:38450.service - OpenSSH per-connection server daemon (154.41.135.50:38450). Jan 16 18:02:11.506914 sshd[1721]: Connection closed by 154.41.135.50 port 38450 [preauth] Jan 16 18:02:11.510794 systemd[1]: sshd@0-188.245.199.112:22-154.41.135.50:38450.service: Deactivated successfully. Jan 16 18:02:17.940339 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 18:02:17.942860 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:18.102730 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:02:18.115809 (kubelet)[1734]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:18.170424 kubelet[1734]: E0116 18:02:18.170352 1734 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:18.173993 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:18.174204 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:18.176089 systemd[1]: kubelet.service: Consumed 182ms CPU time, 105.4M memory peak. Jan 16 18:02:28.425285 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 18:02:28.429122 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:28.620842 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:02:28.632754 (kubelet)[1749]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:28.676326 kubelet[1749]: E0116 18:02:28.676213 1749 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:28.680565 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:28.680743 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:28.681529 systemd[1]: kubelet.service: Consumed 179ms CPU time, 105.3M memory peak. 
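The kubelet exits here at 18:02:07, 18:02:18, 18:02:28 and 18:02:39, roughly 10.5 s apart, each time because `/var/lib/kubelet/config.yaml` does not exist yet (it is written by `kubeadm init`/`join`, which has not run). The spacing is consistent with a unit using `Restart=always` and `RestartSec=10`; those values are an assumption, since the unit file itself is not shown in the log. A minimal sketch of the resulting schedule:

```python
from datetime import datetime, timedelta

def restart_schedule(first_exit, restart_sec=10.0, run_time=0.5, attempts=3):
    """Project systemd restart times for a crash-looping unit.

    restart_sec and run_time are assumed values chosen to match the
    ~10.5 s gaps between kubelet exits seen in this log.
    """
    times, t = [], first_exit
    for _ in range(attempts):
        t = t + timedelta(seconds=restart_sec)   # RestartSec= delay
        times.append(t)                          # unit starts again
        t = t + timedelta(seconds=run_time)      # fails again shortly after
    return times

sched = restart_schedule(datetime(2026, 1, 16, 18, 2, 7))
```

The loop is harmless at this stage; it stops as soon as kubeadm (or equivalent provisioning) writes the config file the kubelet is looking for.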
Jan 16 18:02:36.360171 systemd[1]: Started sshd@1-188.245.199.112:22-68.220.241.50:44860.service - OpenSSH per-connection server daemon (68.220.241.50:44860). Jan 16 18:02:36.840479 systemd-timesyncd[1454]: Contacted time server 144.76.59.106:123 (2.flatcar.pool.ntp.org). Jan 16 18:02:36.841063 systemd-timesyncd[1454]: Initial clock synchronization to Fri 2026-01-16 18:02:37.109763 UTC. Jan 16 18:02:36.938323 sshd[1757]: Accepted publickey for core from 68.220.241.50 port 44860 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:36.941483 sshd-session[1757]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:36.953096 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 18:02:36.955688 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 18:02:36.957935 systemd-logind[1564]: New session 1 of user core. Jan 16 18:02:36.985348 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 18:02:36.990964 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 18:02:37.006218 (systemd)[1763]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:37.010787 systemd-logind[1564]: New session 2 of user core. Jan 16 18:02:37.155816 systemd[1763]: Queued start job for default target default.target. Jan 16 18:02:37.166918 systemd[1763]: Created slice app.slice - User Application Slice. Jan 16 18:02:37.167025 systemd[1763]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 18:02:37.167055 systemd[1763]: Reached target paths.target - Paths. Jan 16 18:02:37.167158 systemd[1763]: Reached target timers.target - Timers. Jan 16 18:02:37.169772 systemd[1763]: Starting dbus.socket - D-Bus User Message Bus Socket... 
Jan 16 18:02:37.173167 systemd[1763]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 18:02:37.194341 systemd[1763]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 18:02:37.194426 systemd[1763]: Reached target sockets.target - Sockets. Jan 16 18:02:37.198029 systemd[1763]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 18:02:37.198162 systemd[1763]: Reached target basic.target - Basic System. Jan 16 18:02:37.198228 systemd[1763]: Reached target default.target - Main User Target. Jan 16 18:02:37.198257 systemd[1763]: Startup finished in 179ms. Jan 16 18:02:37.198587 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 18:02:37.203264 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 18:02:37.532706 systemd[1]: Started sshd@2-188.245.199.112:22-68.220.241.50:44866.service - OpenSSH per-connection server daemon (68.220.241.50:44866). Jan 16 18:02:38.092352 sshd[1777]: Accepted publickey for core from 68.220.241.50 port 44866 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:38.094902 sshd-session[1777]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:38.101813 systemd-logind[1564]: New session 3 of user core. Jan 16 18:02:38.104269 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 18:02:38.394114 sshd[1781]: Connection closed by 68.220.241.50 port 44866 Jan 16 18:02:38.395470 sshd-session[1777]: pam_unix(sshd:session): session closed for user core Jan 16 18:02:38.403420 systemd-logind[1564]: Session 3 logged out. Waiting for processes to exit. Jan 16 18:02:38.404133 systemd[1]: sshd@2-188.245.199.112:22-68.220.241.50:44866.service: Deactivated successfully. Jan 16 18:02:38.406089 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 18:02:38.410429 systemd-logind[1564]: Removed session 3. 
Jan 16 18:02:38.516425 systemd[1]: Started sshd@3-188.245.199.112:22-68.220.241.50:44872.service - OpenSSH per-connection server daemon (68.220.241.50:44872). Jan 16 18:02:38.932360 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 18:02:38.936968 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:39.103243 sshd[1787]: Accepted publickey for core from 68.220.241.50 port 44872 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:39.105163 sshd-session[1787]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:39.112587 systemd-logind[1564]: New session 4 of user core. Jan 16 18:02:39.115920 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:02:39.124509 (kubelet)[1799]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:39.124740 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 16 18:02:39.173057 kubelet[1799]: E0116 18:02:39.172964 1799 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:39.175841 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:39.176014 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:39.177143 systemd[1]: kubelet.service: Consumed 173ms CPU time, 105.7M memory peak. 
Jan 16 18:02:39.413301 sshd[1800]: Connection closed by 68.220.241.50 port 44872 Jan 16 18:02:39.415651 sshd-session[1787]: pam_unix(sshd:session): session closed for user core Jan 16 18:02:39.421348 systemd[1]: sshd@3-188.245.199.112:22-68.220.241.50:44872.service: Deactivated successfully. Jan 16 18:02:39.424132 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 18:02:39.426877 systemd-logind[1564]: Session 4 logged out. Waiting for processes to exit. Jan 16 18:02:39.432307 systemd-logind[1564]: Removed session 4. Jan 16 18:02:39.530264 systemd[1]: Started sshd@4-188.245.199.112:22-68.220.241.50:44876.service - OpenSSH per-connection server daemon (68.220.241.50:44876). Jan 16 18:02:40.111574 sshd[1812]: Accepted publickey for core from 68.220.241.50 port 44876 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:40.113586 sshd-session[1812]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:40.120633 systemd-logind[1564]: New session 5 of user core. Jan 16 18:02:40.127377 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 18:02:40.428285 sshd[1816]: Connection closed by 68.220.241.50 port 44876 Jan 16 18:02:40.429501 sshd-session[1812]: pam_unix(sshd:session): session closed for user core Jan 16 18:02:40.437190 systemd[1]: sshd@4-188.245.199.112:22-68.220.241.50:44876.service: Deactivated successfully. Jan 16 18:02:40.440511 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 18:02:40.442821 systemd-logind[1564]: Session 5 logged out. Waiting for processes to exit. Jan 16 18:02:40.444908 systemd-logind[1564]: Removed session 5. Jan 16 18:02:40.551332 systemd[1]: Started sshd@5-188.245.199.112:22-68.220.241.50:44884.service - OpenSSH per-connection server daemon (68.220.241.50:44884). 
Jan 16 18:02:41.132797 sshd[1822]: Accepted publickey for core from 68.220.241.50 port 44884 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:41.133941 sshd-session[1822]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:41.138999 systemd-logind[1564]: New session 6 of user core. Jan 16 18:02:41.147551 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 18:02:41.359698 sudo[1827]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 18:02:41.360487 sudo[1827]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:02:41.371618 sudo[1827]: pam_unix(sudo:session): session closed for user root Jan 16 18:02:41.476742 sshd[1826]: Connection closed by 68.220.241.50 port 44884 Jan 16 18:02:41.478297 sshd-session[1822]: pam_unix(sshd:session): session closed for user core Jan 16 18:02:41.484685 systemd[1]: sshd@5-188.245.199.112:22-68.220.241.50:44884.service: Deactivated successfully. Jan 16 18:02:41.487814 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 18:02:41.491122 systemd-logind[1564]: Session 6 logged out. Waiting for processes to exit. Jan 16 18:02:41.492513 systemd-logind[1564]: Removed session 6. Jan 16 18:02:41.587623 systemd[1]: Started sshd@6-188.245.199.112:22-68.220.241.50:44898.service - OpenSSH per-connection server daemon (68.220.241.50:44898). Jan 16 18:02:42.137599 sshd[1834]: Accepted publickey for core from 68.220.241.50 port 44898 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:42.139715 sshd-session[1834]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:42.146455 systemd-logind[1564]: New session 7 of user core. Jan 16 18:02:42.155329 systemd[1]: Started session-7.scope - Session 7 of User core. 
Jan 16 18:02:42.341570 sudo[1840]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 18:02:42.341883 sudo[1840]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:02:42.344801 sudo[1840]: pam_unix(sudo:session): session closed for user root Jan 16 18:02:42.354536 sudo[1839]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 18:02:42.355262 sudo[1839]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:02:42.366825 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 18:02:42.414000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:02:42.417012 kernel: kauditd_printk_skb: 172 callbacks suppressed Jan 16 18:02:42.417177 kernel: audit: type=1305 audit(1768586562.414:215): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:02:42.417242 kernel: audit: type=1300 audit(1768586562.414:215): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5a69730 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:42.414000 audit[1864]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc5a69730 a2=420 a3=0 items=0 ppid=1845 pid=1864 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:42.414000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:02:42.419784 augenrules[1864]: No rules Jan 16 18:02:42.420487 
kernel: audit: type=1327 audit(1768586562.414:215): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:02:42.421466 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:02:42.421712 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 18:02:42.422000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.422000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.425710 sudo[1839]: pam_unix(sudo:session): session closed for user root Jan 16 18:02:42.427390 kernel: audit: type=1130 audit(1768586562.422:216): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.427465 kernel: audit: type=1131 audit(1768586562.422:217): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.427486 kernel: audit: type=1106 audit(1768586562.423:218): pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.423000 audit[1839]: USER_END pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 18:02:42.423000 audit[1839]: CRED_DISP pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.430487 kernel: audit: type=1104 audit(1768586562.423:219): pid=1839 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.524642 sshd[1838]: Connection closed by 68.220.241.50 port 44898 Jan 16 18:02:42.525068 sshd-session[1834]: pam_unix(sshd:session): session closed for user core Jan 16 18:02:42.526000 audit[1834]: USER_END pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:42.535143 kernel: audit: type=1106 audit(1768586562.526:220): pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:42.535223 kernel: audit: type=1104 audit(1768586562.526:221): pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:42.526000 audit[1834]: CRED_DISP pid=1834 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:42.534455 systemd[1]: sshd@6-188.245.199.112:22-68.220.241.50:44898.service: Deactivated successfully. Jan 16 18:02:42.533000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-188.245.199.112:22-68.220.241.50:44898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.540015 kernel: audit: type=1131 audit(1768586562.533:222): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-188.245.199.112:22-68.220.241.50:44898 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.537265 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 18:02:42.541632 systemd-logind[1564]: Session 7 logged out. Waiting for processes to exit. Jan 16 18:02:42.543159 systemd-logind[1564]: Removed session 7. Jan 16 18:02:42.635000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.199.112:22-68.220.241.50:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:42.636674 systemd[1]: Started sshd@7-188.245.199.112:22-68.220.241.50:45960.service - OpenSSH per-connection server daemon (68.220.241.50:45960). 
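The audit `PROCTITLE` records above carry the command line as hex-encoded argv joined by NUL bytes, and the `audit(1768586562.414:215)` header is a UNIX timestamp plus an event serial. Both decode mechanically; a short sketch using the `auditctl` record from this log:

```python
from datetime import datetime, timezone

def decode_proctitle(hex_argv):
    """Audit PROCTITLE payloads are hex-encoded argv, NUL-separated."""
    return bytes.fromhex(hex_argv).split(b"\x00")

def decode_audit_id(stamp):
    """'1768586562.414:215' -> (UTC datetime, event serial)."""
    ts, serial = stamp.split(":")
    return datetime.fromtimestamp(float(ts), tz=timezone.utc), int(serial)

argv = decode_proctitle(
    "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573")
# argv -> [b'/sbin/auditctl', b'-R', b'/etc/audit/audit.rules']
when, serial = decode_audit_id("1768586562.414:215")
# when -> 2026-01-16 18:02:42 UTC, matching the journal timestamp on that line
```

Records sharing one `audit(...)` id (here SYSCALL, CONFIG_CHANGE, PROCTITLE) describe the same kernel event; the serial, not the timestamp, is what ties them together.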
Jan 16 18:02:43.206000 audit[1873]: USER_ACCT pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:43.207777 sshd[1873]: Accepted publickey for core from 68.220.241.50 port 45960 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:02:43.208000 audit[1873]: CRED_ACQ pid=1873 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:43.208000 audit[1873]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffe5ecf00 a2=3 a3=0 items=0 ppid=1 pid=1873 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:43.208000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:02:43.210378 sshd-session[1873]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:02:43.217780 systemd-logind[1564]: New session 8 of user core. Jan 16 18:02:43.229387 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 16 18:02:43.232000 audit[1873]: USER_START pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:43.234000 audit[1877]: CRED_ACQ pid=1877 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:02:43.414000 audit[1878]: USER_ACCT pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:43.414000 audit[1878]: CRED_REFR pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:43.415000 audit[1878]: USER_START pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:02:43.415585 sudo[1878]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 18:02:43.416248 sudo[1878]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:02:43.749406 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 18:02:43.778538 (dockerd)[1896]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 18:02:44.034676 dockerd[1896]: time="2026-01-16T18:02:44.034509024Z" level=info msg="Starting up" Jan 16 18:02:44.036885 dockerd[1896]: time="2026-01-16T18:02:44.036811765Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 18:02:44.051640 dockerd[1896]: time="2026-01-16T18:02:44.051509714Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 18:02:44.069924 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport559377407-merged.mount: Deactivated successfully. Jan 16 18:02:44.088182 systemd[1]: var-lib-docker-metacopy\x2dcheck3645187655-merged.mount: Deactivated successfully. Jan 16 18:02:44.096440 dockerd[1896]: time="2026-01-16T18:02:44.096325225Z" level=info msg="Loading containers: start." 
Jan 16 18:02:44.108991 kernel: Initializing XFRM netlink socket Jan 16 18:02:44.170000 audit[1945]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.170000 audit[1945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=fffff22361b0 a2=0 a3=0 items=0 ppid=1896 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.170000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:02:44.173000 audit[1947]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.173000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffca62a6c0 a2=0 a3=0 items=0 ppid=1896 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.173000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:02:44.175000 audit[1949]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1949 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.175000 audit[1949]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff97c3790 a2=0 a3=0 items=0 ppid=1896 pid=1949 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.175000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:02:44.177000 audit[1951]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1951 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.177000 audit[1951]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd59bb70 a2=0 a3=0 items=0 ppid=1896 pid=1951 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.177000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:02:44.182000 audit[1953]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1953 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.182000 audit[1953]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd85bdbe0 a2=0 a3=0 items=0 ppid=1896 pid=1953 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.182000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:02:44.184000 audit[1955]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1955 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.184000 audit[1955]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffc874ee30 a2=0 a3=0 items=0 ppid=1896 pid=1955 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.184000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:02:44.187000 audit[1957]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1957 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.187000 audit[1957]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffdf863fc0 a2=0 a3=0 items=0 ppid=1896 pid=1957 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:02:44.189000 audit[1959]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1959 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.189000 audit[1959]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdaf05d30 a2=0 a3=0 items=0 ppid=1896 pid=1959 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:02:44.226000 audit[1962]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1962 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.226000 audit[1962]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffdf094720 a2=0 a3=0 items=0 ppid=1896 pid=1962 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.226000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 18:02:44.228000 audit[1964]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1964 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.228000 audit[1964]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffff14d84f0 a2=0 a3=0 items=0 ppid=1896 pid=1964 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.228000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:02:44.231000 audit[1966]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1966 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.231000 audit[1966]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc23774e0 a2=0 a3=0 items=0 ppid=1896 pid=1966 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.231000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:02:44.234000 audit[1968]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1968 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.234000 audit[1968]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffc1445720 a2=0 a3=0 items=0 ppid=1896 pid=1968 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.234000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:02:44.236000 audit[1970]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1970 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.236000 audit[1970]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffc2507bd0 a2=0 a3=0 items=0 ppid=1896 pid=1970 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.236000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:02:44.276000 audit[2000]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.276000 audit[2000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffde87f2f0 a2=0 a3=0 items=0 ppid=1896 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.276000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:02:44.278000 audit[2002]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.278000 audit[2002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe2f63fc0 a2=0 a3=0 items=0 
ppid=1896 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.278000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:02:44.281000 audit[2004]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.281000 audit[2004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff9d6f330 a2=0 a3=0 items=0 ppid=1896 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.281000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:02:44.283000 audit[2006]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.283000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc8dcc770 a2=0 a3=0 items=0 ppid=1896 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.283000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:02:44.287000 audit[2008]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.287000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff3a83960 a2=0 a3=0 items=0 ppid=1896 
pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.287000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:02:44.289000 audit[2010]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.289000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff52a93a0 a2=0 a3=0 items=0 ppid=1896 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.289000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:02:44.292000 audit[2012]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.292000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff520e20 a2=0 a3=0 items=0 ppid=1896 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.292000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:02:44.294000 audit[2014]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2014 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.294000 audit[2014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 
a1=fffff5b399e0 a2=0 a3=0 items=0 ppid=1896 pid=2014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.294000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:02:44.297000 audit[2016]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2016 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.297000 audit[2016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=fffffb798dd0 a2=0 a3=0 items=0 ppid=1896 pid=2016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.297000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 18:02:44.299000 audit[2018]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.299000 audit[2018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffef4bcc70 a2=0 a3=0 items=0 ppid=1896 pid=2018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.299000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:02:44.301000 audit[2020]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule 
pid=2020 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.301000 audit[2020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffdc1ec680 a2=0 a3=0 items=0 ppid=1896 pid=2020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.301000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:02:44.303000 audit[2022]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2022 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.303000 audit[2022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffd9fc5660 a2=0 a3=0 items=0 ppid=1896 pid=2022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.303000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:02:44.305000 audit[2024]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2024 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.305000 audit[2024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffcb4725b0 a2=0 a3=0 items=0 ppid=1896 pid=2024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.305000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:02:44.311000 audit[2029]: 
NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2029 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.311000 audit[2029]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc425a0a0 a2=0 a3=0 items=0 ppid=1896 pid=2029 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.311000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:02:44.314000 audit[2031]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.314000 audit[2031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=fffff3df89b0 a2=0 a3=0 items=0 ppid=1896 pid=2031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.314000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:02:44.316000 audit[2033]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2033 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.316000 audit[2033]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffdc773be0 a2=0 a3=0 items=0 ppid=1896 pid=2033 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.316000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:02:44.319000 audit[2035]: NETFILTER_CFG 
table=filter:31 family=10 entries=1 op=nft_register_chain pid=2035 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.319000 audit[2035]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcc3dd4b0 a2=0 a3=0 items=0 ppid=1896 pid=2035 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.319000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:02:44.321000 audit[2037]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2037 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.321000 audit[2037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffdc6226d0 a2=0 a3=0 items=0 ppid=1896 pid=2037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.321000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:02:44.323000 audit[2039]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2039 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:02:44.323000 audit[2039]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffcf99eb30 a2=0 a3=0 items=0 ppid=1896 pid=2039 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.323000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:02:44.343000 audit[2044]: NETFILTER_CFG 
table=nat:34 family=2 entries=2 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.343000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff72a02e0 a2=0 a3=0 items=0 ppid=1896 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.343000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 18:02:44.346000 audit[2046]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.346000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffc36a9b30 a2=0 a3=0 items=0 ppid=1896 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.346000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 18:02:44.359000 audit[2054]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.359000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff15ed210 a2=0 a3=0 items=0 ppid=1896 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.359000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 18:02:44.374000 audit[2060]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2060 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.374000 audit[2060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffe58c9a20 a2=0 a3=0 items=0 ppid=1896 pid=2060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.374000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 18:02:44.378000 audit[2062]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.378000 audit[2062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffca4d7a70 a2=0 a3=0 items=0 ppid=1896 pid=2062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.378000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 18:02:44.380000 audit[2064]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2064 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.380000 audit[2064]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffebb9a1a0 a2=0 a3=0 items=0 ppid=1896 pid=2064 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.380000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 18:02:44.383000 audit[2066]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2066 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.383000 audit[2066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffc7550300 a2=0 a3=0 items=0 ppid=1896 pid=2066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.383000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:02:44.386000 audit[2068]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2068 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:02:44.386000 audit[2068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc1139bc0 a2=0 a3=0 items=0 ppid=1896 pid=2068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:02:44.386000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 18:02:44.387863 systemd-networkd[1472]: docker0: Link UP Jan 16 18:02:44.392811 dockerd[1896]: 
time="2026-01-16T18:02:44.392729382Z" level=info msg="Loading containers: done." Jan 16 18:02:44.418931 dockerd[1896]: time="2026-01-16T18:02:44.418867758Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 18:02:44.419196 dockerd[1896]: time="2026-01-16T18:02:44.419018526Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 18:02:44.419229 dockerd[1896]: time="2026-01-16T18:02:44.419198313Z" level=info msg="Initializing buildkit" Jan 16 18:02:44.445226 dockerd[1896]: time="2026-01-16T18:02:44.445178625Z" level=info msg="Completed buildkit initialization" Jan 16 18:02:44.452679 dockerd[1896]: time="2026-01-16T18:02:44.452618709Z" level=info msg="Daemon has completed initialization" Jan 16 18:02:44.452829 dockerd[1896]: time="2026-01-16T18:02:44.452688014Z" level=info msg="API listen on /run/docker.sock" Jan 16 18:02:44.453264 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 18:02:44.453000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:45.066946 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck77175218-merged.mount: Deactivated successfully. Jan 16 18:02:45.509149 containerd[1593]: time="2026-01-16T18:02:45.508693649Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 16 18:02:46.158181 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2702583743.mount: Deactivated successfully. 
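A note on reading the audit records above: the `proctitle=` field in each PROCTITLE record is the process command line, hex-encoded, with NUL bytes separating the argv entries. A minimal helper to recover the readable command (the function name `decode_proctitle` is my own; the hex string in the example is taken verbatim from the first PROCTITLE record above):

```python
def decode_proctitle(hexstr: str) -> str:
    """Decode an audit PROCTITLE hex string into a space-joined argv."""
    raw = bytes.fromhex(hexstr)
    # argv entries are NUL-separated; drop empty trailing segments
    return " ".join(arg.decode() for arg in raw.split(b"\x00") if arg)

print(decode_proctitle(
    "2F7573722F62696E2F69707461626C6573002D2D77616974002D74"
    "0066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31"
))
# → /usr/bin/iptables --wait -t filter -N DOCKER-ISOLATION-STAGE-1
```

Decoded this way, the audit trail above reads as dockerd creating its standard chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) in both the IPv4 (`family=2`) and IPv6 (`family=10`) filter and nat tables.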
Jan 16 18:02:47.045976 containerd[1593]: time="2026-01-16T18:02:47.045626307Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:47.047678 containerd[1593]: time="2026-01-16T18:02:47.047616790Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 16 18:02:47.049995 containerd[1593]: time="2026-01-16T18:02:47.049074069Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:47.052339 containerd[1593]: time="2026-01-16T18:02:47.052304079Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:47.054542 containerd[1593]: time="2026-01-16T18:02:47.054495083Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.545727134s" Jan 16 18:02:47.054700 containerd[1593]: time="2026-01-16T18:02:47.054680794Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 16 18:02:47.056609 containerd[1593]: time="2026-01-16T18:02:47.056502191Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 16 18:02:48.436697 containerd[1593]: time="2026-01-16T18:02:48.436642132Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:48.438590 containerd[1593]: time="2026-01-16T18:02:48.438552470Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23547679" Jan 16 18:02:48.439986 containerd[1593]: time="2026-01-16T18:02:48.439567600Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:48.442807 containerd[1593]: time="2026-01-16T18:02:48.442769837Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:48.443477 containerd[1593]: time="2026-01-16T18:02:48.443453174Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.386676824s" Jan 16 18:02:48.443577 containerd[1593]: time="2026-01-16T18:02:48.443563503Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 16 18:02:48.444010 containerd[1593]: time="2026-01-16T18:02:48.443990710Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 16 18:02:49.202380 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 16 18:02:49.204406 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:49.379501 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:02:49.382429 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 16 18:02:49.382501 kernel: audit: type=1130 audit(1768586569.378:273): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:49.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:49.389553 (kubelet)[2180]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:49.444996 kubelet[2180]: E0116 18:02:49.443316 2180 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:49.448771 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:49.450000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:02:49.449126 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:49.451111 systemd[1]: kubelet.service: Consumed 167ms CPU time, 105M memory peak. Jan 16 18:02:49.453978 kernel: audit: type=1131 audit(1768586569.450:274): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=failed' Jan 16 18:02:49.693030 containerd[1593]: time="2026-01-16T18:02:49.692971118Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:49.694619 containerd[1593]: time="2026-01-16T18:02:49.694560911Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Jan 16 18:02:49.695287 containerd[1593]: time="2026-01-16T18:02:49.695241175Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:49.697987 containerd[1593]: time="2026-01-16T18:02:49.697957237Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:49.698780 containerd[1593]: time="2026-01-16T18:02:49.698747845Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.254561035s" Jan 16 18:02:49.698780 containerd[1593]: time="2026-01-16T18:02:49.698781834Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 16 18:02:49.700055 containerd[1593]: time="2026-01-16T18:02:49.700031414Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 16 18:02:50.673196 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2730033169.mount: Deactivated successfully. 
Jan 16 18:02:51.054506 containerd[1593]: time="2026-01-16T18:02:51.054452155Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:51.055701 containerd[1593]: time="2026-01-16T18:02:51.055486942Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=0" Jan 16 18:02:51.056605 containerd[1593]: time="2026-01-16T18:02:51.056564631Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:51.060311 containerd[1593]: time="2026-01-16T18:02:51.060247340Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:51.062067 containerd[1593]: time="2026-01-16T18:02:51.061937434Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.361781773s" Jan 16 18:02:51.062067 containerd[1593]: time="2026-01-16T18:02:51.062024644Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 16 18:02:51.062729 containerd[1593]: time="2026-01-16T18:02:51.062583412Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 16 18:02:51.259201 update_engine[1565]: I20260116 18:02:51.259083 1565 update_attempter.cc:509] Updating boot flags... 
Jan 16 18:02:51.626058 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3813564156.mount: Deactivated successfully. Jan 16 18:02:52.413477 containerd[1593]: time="2026-01-16T18:02:52.413426388Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:52.415966 containerd[1593]: time="2026-01-16T18:02:52.415881215Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 16 18:02:52.417063 containerd[1593]: time="2026-01-16T18:02:52.416504635Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:52.420362 containerd[1593]: time="2026-01-16T18:02:52.420315959Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:52.421944 containerd[1593]: time="2026-01-16T18:02:52.421895666Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.358984568s" Jan 16 18:02:52.422045 containerd[1593]: time="2026-01-16T18:02:52.421942719Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 16 18:02:52.422930 containerd[1593]: time="2026-01-16T18:02:52.422828928Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 18:02:52.997201 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount4028289422.mount: Deactivated successfully. Jan 16 18:02:53.005985 containerd[1593]: time="2026-01-16T18:02:53.005566742Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:02:53.008358 containerd[1593]: time="2026-01-16T18:02:53.008290651Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 18:02:53.009610 containerd[1593]: time="2026-01-16T18:02:53.009573814Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:02:53.014064 containerd[1593]: time="2026-01-16T18:02:53.013993600Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:02:53.015804 containerd[1593]: time="2026-01-16T18:02:53.015651648Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 592.596623ms" Jan 16 18:02:53.015804 containerd[1593]: time="2026-01-16T18:02:53.015691405Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 18:02:53.017081 containerd[1593]: time="2026-01-16T18:02:53.017021394Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 16 18:02:53.762993 
systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1495630407.mount: Deactivated successfully. Jan 16 18:02:55.684190 containerd[1593]: time="2026-01-16T18:02:55.683576164Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:55.685602 containerd[1593]: time="2026-01-16T18:02:55.685534148Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 16 18:02:55.686018 containerd[1593]: time="2026-01-16T18:02:55.685972879Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:55.689987 containerd[1593]: time="2026-01-16T18:02:55.689214961Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:02:55.690240 containerd[1593]: time="2026-01-16T18:02:55.690200392Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.673116358s" Jan 16 18:02:55.690240 containerd[1593]: time="2026-01-16T18:02:55.690238267Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 16 18:02:59.451760 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 5. Jan 16 18:02:59.455490 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:02:59.635167 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:02:59.634000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:59.640961 kernel: audit: type=1130 audit(1768586579.634:275): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:02:59.645412 (kubelet)[2355]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:02:59.687229 kubelet[2355]: E0116 18:02:59.684502 2355 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:02:59.688191 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:02:59.688449 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:02:59.689038 systemd[1]: kubelet.service: Consumed 171ms CPU time, 105.8M memory peak. Jan 16 18:02:59.688000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:02:59.692969 kernel: audit: type=1131 audit(1768586579.688:276): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:03:00.719973 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:03:00.719000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:00.720325 systemd[1]: kubelet.service: Consumed 171ms CPU time, 105.8M memory peak. Jan 16 18:03:00.719000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:00.728155 kernel: audit: type=1130 audit(1768586580.719:277): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:00.728300 kernel: audit: type=1131 audit(1768586580.719:278): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:00.726331 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:03:00.763238 systemd[1]: Reload requested from client PID 2369 ('systemctl') (unit session-8.scope)... Jan 16 18:03:00.763256 systemd[1]: Reloading... Jan 16 18:03:00.921971 zram_generator::config[2416]: No configuration found. Jan 16 18:03:01.128379 systemd[1]: Reloading finished in 364 ms. 
Jan 16 18:03:01.156000 audit: BPF prog-id=61 op=LOAD Jan 16 18:03:01.160202 kernel: audit: type=1334 audit(1768586581.156:279): prog-id=61 op=LOAD Jan 16 18:03:01.160319 kernel: audit: type=1334 audit(1768586581.156:280): prog-id=41 op=UNLOAD Jan 16 18:03:01.156000 audit: BPF prog-id=41 op=UNLOAD Jan 16 18:03:01.156000 audit: BPF prog-id=62 op=LOAD Jan 16 18:03:01.161974 kernel: audit: type=1334 audit(1768586581.156:281): prog-id=62 op=LOAD Jan 16 18:03:01.156000 audit: BPF prog-id=63 op=LOAD Jan 16 18:03:01.163159 kernel: audit: type=1334 audit(1768586581.156:282): prog-id=63 op=LOAD Jan 16 18:03:01.156000 audit: BPF prog-id=42 op=UNLOAD Jan 16 18:03:01.164217 kernel: audit: type=1334 audit(1768586581.156:283): prog-id=42 op=UNLOAD Jan 16 18:03:01.158000 audit: BPF prog-id=43 op=UNLOAD Jan 16 18:03:01.159000 audit: BPF prog-id=64 op=LOAD Jan 16 18:03:01.159000 audit: BPF prog-id=57 op=UNLOAD Jan 16 18:03:01.165976 kernel: audit: type=1334 audit(1768586581.158:284): prog-id=43 op=UNLOAD Jan 16 18:03:01.167000 audit: BPF prog-id=65 op=LOAD Jan 16 18:03:01.168000 audit: BPF prog-id=51 op=UNLOAD Jan 16 18:03:01.168000 audit: BPF prog-id=66 op=LOAD Jan 16 18:03:01.168000 audit: BPF prog-id=67 op=LOAD Jan 16 18:03:01.168000 audit: BPF prog-id=52 op=UNLOAD Jan 16 18:03:01.168000 audit: BPF prog-id=53 op=UNLOAD Jan 16 18:03:01.169000 audit: BPF prog-id=68 op=LOAD Jan 16 18:03:01.169000 audit: BPF prog-id=69 op=LOAD Jan 16 18:03:01.169000 audit: BPF prog-id=54 op=UNLOAD Jan 16 18:03:01.169000 audit: BPF prog-id=55 op=UNLOAD Jan 16 18:03:01.170000 audit: BPF prog-id=70 op=LOAD Jan 16 18:03:01.170000 audit: BPF prog-id=56 op=UNLOAD Jan 16 18:03:01.170000 audit: BPF prog-id=71 op=LOAD Jan 16 18:03:01.170000 audit: BPF prog-id=44 op=UNLOAD Jan 16 18:03:01.170000 audit: BPF prog-id=72 op=LOAD Jan 16 18:03:01.170000 audit: BPF prog-id=73 op=LOAD Jan 16 18:03:01.170000 audit: BPF prog-id=45 op=UNLOAD Jan 16 18:03:01.170000 audit: BPF prog-id=46 op=UNLOAD Jan 16 18:03:01.171000 
audit: BPF prog-id=74 op=LOAD Jan 16 18:03:01.171000 audit: BPF prog-id=50 op=UNLOAD Jan 16 18:03:01.173000 audit: BPF prog-id=75 op=LOAD Jan 16 18:03:01.173000 audit: BPF prog-id=58 op=UNLOAD Jan 16 18:03:01.173000 audit: BPF prog-id=76 op=LOAD Jan 16 18:03:01.173000 audit: BPF prog-id=77 op=LOAD Jan 16 18:03:01.173000 audit: BPF prog-id=59 op=UNLOAD Jan 16 18:03:01.173000 audit: BPF prog-id=60 op=UNLOAD Jan 16 18:03:01.174000 audit: BPF prog-id=78 op=LOAD Jan 16 18:03:01.174000 audit: BPF prog-id=47 op=UNLOAD Jan 16 18:03:01.174000 audit: BPF prog-id=79 op=LOAD Jan 16 18:03:01.174000 audit: BPF prog-id=80 op=LOAD Jan 16 18:03:01.174000 audit: BPF prog-id=48 op=UNLOAD Jan 16 18:03:01.174000 audit: BPF prog-id=49 op=UNLOAD Jan 16 18:03:01.193030 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 18:03:01.193119 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 18:03:01.194120 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:03:01.193000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:03:01.194206 systemd[1]: kubelet.service: Consumed 118ms CPU time, 95.1M memory peak. Jan 16 18:03:01.199025 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:03:01.380000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:01.381035 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:03:01.392344 (kubelet)[2465]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:03:01.436274 kubelet[2465]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:03:01.436625 kubelet[2465]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:03:01.436684 kubelet[2465]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:03:01.436830 kubelet[2465]: I0116 18:03:01.436799 2465 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:03:03.268409 kubelet[2465]: I0116 18:03:03.268346 2465 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 16 18:03:03.270119 kubelet[2465]: I0116 18:03:03.269035 2465 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:03:03.270119 kubelet[2465]: I0116 18:03:03.269404 2465 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 18:03:03.296057 kubelet[2465]: E0116 18:03:03.295996 2465 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://188.245.199.112:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 188.245.199.112:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 16 18:03:03.296641 kubelet[2465]: I0116 18:03:03.296498 2465 
dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:03:03.306437 kubelet[2465]: I0116 18:03:03.306382 2465 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:03:03.310812 kubelet[2465]: I0116 18:03:03.310755 2465 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 18:03:03.311851 kubelet[2465]: I0116 18:03:03.311786 2465 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:03:03.312114 kubelet[2465]: I0116 18:03:03.311842 2465 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580-0-0-p-f44e0c3b96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container",
"CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:03:03.312309 kubelet[2465]: I0116 18:03:03.312170 2465 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 18:03:03.312309 kubelet[2465]: I0116 18:03:03.312180 2465 container_manager_linux.go:303] "Creating device plugin manager" Jan 16 18:03:03.313453 kubelet[2465]: I0116 18:03:03.313420 2465 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:03:03.318217 kubelet[2465]: I0116 18:03:03.318164 2465 kubelet.go:480] "Attempting to sync node with API server" Jan 16 18:03:03.318217 kubelet[2465]: I0116 18:03:03.318214 2465 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:03:03.318370 kubelet[2465]: I0116 18:03:03.318270 2465 kubelet.go:386] "Adding apiserver pod source" Jan 16 18:03:03.318370 kubelet[2465]: I0116 18:03:03.318297 2465 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:03:03.325816 kubelet[2465]: I0116 18:03:03.325782 2465 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:03:03.326585 kubelet[2465]: I0116 18:03:03.326556 2465 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 18:03:03.326750 kubelet[2465]: W0116 18:03:03.326691 2465 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 18:03:03.328861 kubelet[2465]: E0116 18:03:03.328826 2465 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://188.245.199.112:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580-0-0-p-f44e0c3b96&limit=500&resourceVersion=0\": dial tcp 188.245.199.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 16 18:03:03.329271 kubelet[2465]: E0116 18:03:03.329097 2465 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://188.245.199.112:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 188.245.199.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 16 18:03:03.329696 kubelet[2465]: I0116 18:03:03.329623 2465 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:03:03.329696 kubelet[2465]: I0116 18:03:03.329675 2465 server.go:1289] "Started kubelet" Jan 16 18:03:03.331836 kubelet[2465]: I0116 18:03:03.331795 2465 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 18:03:03.332087 kubelet[2465]: I0116 18:03:03.332019 2465 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:03:03.332404 kubelet[2465]: I0116 18:03:03.332376 2465 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:03:03.333197 kubelet[2465]: I0116 18:03:03.333176 2465 server.go:317] "Adding debug handlers to kubelet server" Jan 16 18:03:03.335814 kubelet[2465]: I0116 18:03:03.335791 2465 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:03:03.337630 kubelet[2465]: E0116 18:03:03.335743 2465 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://188.245.199.112:6443/api/v1/namespaces/default/events\": dial tcp 
188.245.199.112:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4580-0-0-p-f44e0c3b96.188b48208de297fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580-0-0-p-f44e0c3b96,UID:ci-4580-0-0-p-f44e0c3b96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-f44e0c3b96,},FirstTimestamp:2026-01-16 18:03:03.329642494 +0000 UTC m=+1.931372699,LastTimestamp:2026-01-16 18:03:03.329642494 +0000 UTC m=+1.931372699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-f44e0c3b96,}" Jan 16 18:03:03.339910 kubelet[2465]: I0116 18:03:03.339236 2465 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:03:03.342297 kubelet[2465]: E0116 18:03:03.342267 2465 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:03:03.342540 kubelet[2465]: E0116 18:03:03.342516 2465 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" Jan 16 18:03:03.342540 kubelet[2465]: I0116 18:03:03.342544 2465 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:03:03.342770 kubelet[2465]: I0116 18:03:03.342745 2465 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:03:03.342829 kubelet[2465]: I0116 18:03:03.342814 2465 reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:03:03.343843 kubelet[2465]: E0116 18:03:03.343805 2465 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://188.245.199.112:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 188.245.199.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 16 18:03:03.344093 kubelet[2465]: I0116 18:03:03.344066 2465 factory.go:223] Registration of the systemd container factory successfully Jan 16 18:03:03.344191 kubelet[2465]: I0116 18:03:03.344168 2465 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:03:03.346069 kubelet[2465]: E0116 18:03:03.346021 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.199.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-f44e0c3b96?timeout=10s\": dial tcp 188.245.199.112:6443: connect: connection refused" interval="200ms" Jan 16 18:03:03.346175 kubelet[2465]: I0116 18:03:03.346142 2465 factory.go:223] Registration of the containerd container factory successfully Jan 16 18:03:03.345000 audit[2480]: NETFILTER_CFG 
table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2480 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.345000 audit[2480]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff0e3c670 a2=0 a3=0 items=0 ppid=2465 pid=2480 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.345000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 18:03:03.349000 audit[2482]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2482 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.349000 audit[2482]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe4452fe0 a2=0 a3=0 items=0 ppid=2465 pid=2482 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.349000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:03:03.356000 audit[2486]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2486 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.356000 audit[2486]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcf774190 a2=0 a3=0 items=0 ppid=2465 pid=2486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.356000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:03:03.358000 audit[2488]: 
NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2488 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.358000 audit[2488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc6cef020 a2=0 a3=0 items=0 ppid=2465 pid=2488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.358000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:03:03.362199 kubelet[2465]: I0116 18:03:03.362165 2465 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:03:03.362199 kubelet[2465]: I0116 18:03:03.362181 2465 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:03:03.362396 kubelet[2465]: I0116 18:03:03.362329 2465 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:03:03.365282 kubelet[2465]: I0116 18:03:03.365260 2465 policy_none.go:49] "None policy: Start" Jan 16 18:03:03.365454 kubelet[2465]: I0116 18:03:03.365354 2465 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:03:03.365454 kubelet[2465]: I0116 18:03:03.365369 2465 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:03:03.372000 audit[2492]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2492 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.372000 audit[2492]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe568ac90 a2=0 a3=0 items=0 ppid=2465 pid=2492 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.372000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 16 18:03:03.374327 kubelet[2465]: I0116 18:03:03.374276 2465 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 16 18:03:03.374000 audit[2495]: NETFILTER_CFG table=mangle:47 family=2 entries=1 op=nft_register_chain pid=2495 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.374000 audit[2495]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdcf51bc0 a2=0 a3=0 items=0 ppid=2465 pid=2495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.374000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:03:03.377024 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 16 18:03:03.377000 audit[2494]: NETFILTER_CFG table=mangle:48 family=10 entries=2 op=nft_register_chain pid=2494 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:03.377000 audit[2494]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffef942400 a2=0 a3=0 items=0 ppid=2465 pid=2494 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 18:03:03.378988 kubelet[2465]: I0116 18:03:03.378925 2465 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv6" Jan 16 18:03:03.378988 kubelet[2465]: I0116 18:03:03.378970 2465 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 16 18:03:03.378988 kubelet[2465]: I0116 18:03:03.378990 2465 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 16 18:03:03.379106 kubelet[2465]: I0116 18:03:03.378998 2465 kubelet.go:2436] "Starting kubelet main sync loop" Jan 16 18:03:03.379106 kubelet[2465]: E0116 18:03:03.379039 2465 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:03:03.378000 audit[2496]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2496 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.378000 audit[2496]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff6c1b280 a2=0 a3=0 items=0 ppid=2465 pid=2496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.378000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:03:03.380240 kubelet[2465]: E0116 18:03:03.380154 2465 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://188.245.199.112:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 188.245.199.112:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 16 18:03:03.380000 audit[2498]: NETFILTER_CFG table=mangle:50 family=10 entries=1 op=nft_register_chain pid=2498 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:03.380000 audit[2498]: SYSCALL arch=c00000b7 syscall=211 
success=yes exit=104 a0=3 a1=ffffc5557640 a2=0 a3=0 items=0 ppid=2465 pid=2498 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.380000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:03:03.381000 audit[2499]: NETFILTER_CFG table=filter:51 family=2 entries=1 op=nft_register_chain pid=2499 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:03.381000 audit[2499]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd4d5f0b0 a2=0 a3=0 items=0 ppid=2465 pid=2499 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.381000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:03:03.381000 audit[2500]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2500 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:03.381000 audit[2500]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcc312390 a2=0 a3=0 items=0 ppid=2465 pid=2500 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.381000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:03:03.382000 audit[2501]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2501 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:03.382000 audit[2501]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff2a228b0 a2=0 a3=0 items=0 ppid=2465 pid=2501 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:03:03.388781 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 18:03:03.393882 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 18:03:03.405612 kubelet[2465]: E0116 18:03:03.405524 2465 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 18:03:03.405968 kubelet[2465]: I0116 18:03:03.405894 2465 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:03:03.406043 kubelet[2465]: I0116 18:03:03.405931 2465 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:03:03.408501 kubelet[2465]: I0116 18:03:03.408158 2465 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:03:03.412894 kubelet[2465]: E0116 18:03:03.412750 2465 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 18:03:03.412894 kubelet[2465]: E0116 18:03:03.412812 2465 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4580-0-0-p-f44e0c3b96\" not found" Jan 16 18:03:03.498563 systemd[1]: Created slice kubepods-burstable-pod2f4b6f62641848800631bb2e9a690e77.slice - libcontainer container kubepods-burstable-pod2f4b6f62641848800631bb2e9a690e77.slice. 
Jan 16 18:03:03.509934 kubelet[2465]: I0116 18:03:03.509421 2465 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.510398 kubelet[2465]: E0116 18:03:03.510350 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.199.112:6443/api/v1/nodes\": dial tcp 188.245.199.112:6443: connect: connection refused" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.515356 kubelet[2465]: E0116 18:03:03.515266 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.519937 systemd[1]: Created slice kubepods-burstable-pod7395277834893701c6f0c9db351c0843.slice - libcontainer container kubepods-burstable-pod7395277834893701c6f0c9db351c0843.slice. Jan 16 18:03:03.524240 kubelet[2465]: E0116 18:03:03.524151 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.547259 kubelet[2465]: E0116 18:03:03.547210 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.199.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-f44e0c3b96?timeout=10s\": dial tcp 188.245.199.112:6443: connect: connection refused" interval="400ms" Jan 16 18:03:03.547834 systemd[1]: Created slice kubepods-burstable-pod19540ff300aad99c245720a4185f1618.slice - libcontainer container kubepods-burstable-pod19540ff300aad99c245720a4185f1618.slice. 
Jan 16 18:03:03.550696 kubelet[2465]: E0116 18:03:03.550499 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644315 kubelet[2465]: I0116 18:03:03.644226 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-ca-certs\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644522 kubelet[2465]: I0116 18:03:03.644322 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644522 kubelet[2465]: I0116 18:03:03.644395 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-flexvolume-dir\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644522 kubelet[2465]: I0116 18:03:03.644458 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-k8s-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644690 
kubelet[2465]: I0116 18:03:03.644527 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644690 kubelet[2465]: I0116 18:03:03.644571 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19540ff300aad99c245720a4185f1618-kubeconfig\") pod \"kube-scheduler-ci-4580-0-0-p-f44e0c3b96\" (UID: \"19540ff300aad99c245720a4185f1618\") " pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644690 kubelet[2465]: I0116 18:03:03.644627 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-k8s-certs\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644854 kubelet[2465]: I0116 18:03:03.644695 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-ca-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.644854 kubelet[2465]: I0116 18:03:03.644735 2465 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-kubeconfig\") pod 
\"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.714004 kubelet[2465]: I0116 18:03:03.713904 2465 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.714712 kubelet[2465]: E0116 18:03:03.714641 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.199.112:6443/api/v1/nodes\": dial tcp 188.245.199.112:6443: connect: connection refused" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:03.818250 containerd[1593]: time="2026-01-16T18:03:03.818078917Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580-0-0-p-f44e0c3b96,Uid:2f4b6f62641848800631bb2e9a690e77,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:03.827475 containerd[1593]: time="2026-01-16T18:03:03.827185114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580-0-0-p-f44e0c3b96,Uid:7395277834893701c6f0c9db351c0843,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:03.854921 containerd[1593]: time="2026-01-16T18:03:03.854875653Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580-0-0-p-f44e0c3b96,Uid:19540ff300aad99c245720a4185f1618,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:03.857666 containerd[1593]: time="2026-01-16T18:03:03.857620922Z" level=info msg="connecting to shim e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992" address="unix:///run/containerd/s/e662329103899a178e025cd056a1172350b0a678b92e3148111a692aa51a7c83" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:03.873384 containerd[1593]: time="2026-01-16T18:03:03.873298907Z" level=info msg="connecting to shim 2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e" address="unix:///run/containerd/s/5e271199ebe9645c98906e4f359aeeb2c102fa2c6e09d7d1d5a5d0f8bf35d5cd" namespace=k8s.io protocol=ttrpc 
version=3 Jan 16 18:03:03.889221 systemd[1]: Started cri-containerd-e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992.scope - libcontainer container e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992. Jan 16 18:03:03.909019 containerd[1593]: time="2026-01-16T18:03:03.907637554Z" level=info msg="connecting to shim 0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d" address="unix:///run/containerd/s/1c6bcf964c3e6a14cadfc9e16e4bb402cb7af0c550d84e0d2cc52e044330cf5e" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:03.915246 systemd[1]: Started cri-containerd-2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e.scope - libcontainer container 2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e. Jan 16 18:03:03.926000 audit: BPF prog-id=81 op=LOAD Jan 16 18:03:03.927000 audit: BPF prog-id=82 op=LOAD Jan 16 18:03:03.927000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.927000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.927000 audit: BPF prog-id=82 op=UNLOAD Jan 16 18:03:03.927000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.927000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.928000 audit: BPF prog-id=83 op=LOAD Jan 16 18:03:03.928000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.928000 audit: BPF prog-id=84 op=LOAD Jan 16 18:03:03.928000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.928000 audit: BPF prog-id=84 op=UNLOAD Jan 16 18:03:03.928000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 16 18:03:03.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.928000 audit: BPF prog-id=83 op=UNLOAD Jan 16 18:03:03.928000 audit[2532]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.928000 audit: BPF prog-id=85 op=LOAD Jan 16 18:03:03.928000 audit[2532]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2510 pid=2532 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.928000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6533653632303063643163376534396337666362643162646263616430 Jan 16 18:03:03.935000 audit: BPF prog-id=86 op=LOAD Jan 16 18:03:03.937000 audit: BPF prog-id=87 op=LOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.937000 audit: BPF prog-id=87 op=UNLOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.937000 audit: BPF prog-id=88 op=LOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.937000 audit: BPF prog-id=89 op=LOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.937000 audit: BPF prog-id=89 op=UNLOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.937000 audit: BPF prog-id=88 op=UNLOAD Jan 16 18:03:03.937000 audit[2553]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2553 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.938000 audit: BPF prog-id=90 op=LOAD Jan 16 18:03:03.938000 audit[2553]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2530 pid=2553 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.938000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265373638633562343431613936303130303263373063383261313161 Jan 16 18:03:03.948641 kubelet[2465]: E0116 18:03:03.948585 2465 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://188.245.199.112:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-f44e0c3b96?timeout=10s\": dial tcp 188.245.199.112:6443: connect: connection refused" interval="800ms" Jan 16 18:03:03.950230 systemd[1]: Started cri-containerd-0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d.scope - libcontainer container 0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d. 
Jan 16 18:03:03.977000 audit: BPF prog-id=91 op=LOAD Jan 16 18:03:03.978000 audit: BPF prog-id=92 op=LOAD Jan 16 18:03:03.978000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ec180 a2=98 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.978000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.979000 audit: BPF prog-id=92 op=UNLOAD Jan 16 18:03:03.979000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.979000 audit: BPF prog-id=93 op=LOAD Jan 16 18:03:03.979000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ec3e8 a2=98 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.979000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.979000 audit: BPF prog-id=94 op=LOAD Jan 16 18:03:03.979000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001ec168 a2=98 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.979000 audit: BPF prog-id=94 op=UNLOAD Jan 16 18:03:03.979000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.980000 audit: BPF prog-id=93 op=UNLOAD Jan 16 18:03:03.980000 audit[2596]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:03:03.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.980000 audit: BPF prog-id=95 op=LOAD Jan 16 18:03:03.980000 audit[2596]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001ec648 a2=98 a3=0 items=0 ppid=2572 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:03.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3061616263323138313532346532326461663661343435373966666232 Jan 16 18:03:03.990498 containerd[1593]: time="2026-01-16T18:03:03.990459070Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580-0-0-p-f44e0c3b96,Uid:2f4b6f62641848800631bb2e9a690e77,Namespace:kube-system,Attempt:0,} returns sandbox id \"e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992\"" Jan 16 18:03:03.998664 containerd[1593]: time="2026-01-16T18:03:03.998619479Z" level=info msg="CreateContainer within sandbox \"e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 18:03:04.008218 containerd[1593]: time="2026-01-16T18:03:04.008155504Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580-0-0-p-f44e0c3b96,Uid:7395277834893701c6f0c9db351c0843,Namespace:kube-system,Attempt:0,} returns sandbox id \"2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e\"" Jan 16 18:03:04.015319 containerd[1593]: 
time="2026-01-16T18:03:04.015223008Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580-0-0-p-f44e0c3b96,Uid:19540ff300aad99c245720a4185f1618,Namespace:kube-system,Attempt:0,} returns sandbox id \"0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d\"" Jan 16 18:03:04.018409 containerd[1593]: time="2026-01-16T18:03:04.018254581Z" level=info msg="CreateContainer within sandbox \"2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 18:03:04.019428 containerd[1593]: time="2026-01-16T18:03:04.019313909Z" level=info msg="Container 63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:04.023724 containerd[1593]: time="2026-01-16T18:03:04.023658163Z" level=info msg="CreateContainer within sandbox \"0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 18:03:04.029525 containerd[1593]: time="2026-01-16T18:03:04.029242870Z" level=info msg="CreateContainer within sandbox \"e3e6200cd1c7e49c7fcbd1bdbcad0d0961bedd10b66a228ec37e772278656992\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796\"" Jan 16 18:03:04.029620 containerd[1593]: time="2026-01-16T18:03:04.029571451Z" level=info msg="Container 64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:04.031649 containerd[1593]: time="2026-01-16T18:03:04.031511185Z" level=info msg="StartContainer for \"63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796\"" Jan 16 18:03:04.033073 containerd[1593]: time="2026-01-16T18:03:04.033032456Z" level=info msg="connecting to shim 63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796" 
address="unix:///run/containerd/s/e662329103899a178e025cd056a1172350b0a678b92e3148111a692aa51a7c83" protocol=ttrpc version=3 Jan 16 18:03:04.047992 containerd[1593]: time="2026-01-16T18:03:04.047830430Z" level=info msg="Container 43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:04.049609 containerd[1593]: time="2026-01-16T18:03:04.049555888Z" level=info msg="CreateContainer within sandbox \"2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba\"" Jan 16 18:03:04.050969 containerd[1593]: time="2026-01-16T18:03:04.050208004Z" level=info msg="StartContainer for \"64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba\"" Jan 16 18:03:04.051373 containerd[1593]: time="2026-01-16T18:03:04.051332273Z" level=info msg="connecting to shim 64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba" address="unix:///run/containerd/s/5e271199ebe9645c98906e4f359aeeb2c102fa2c6e09d7d1d5a5d0f8bf35d5cd" protocol=ttrpc version=3 Jan 16 18:03:04.060652 containerd[1593]: time="2026-01-16T18:03:04.060607315Z" level=info msg="CreateContainer within sandbox \"0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2\"" Jan 16 18:03:04.062753 containerd[1593]: time="2026-01-16T18:03:04.062681292Z" level=info msg="StartContainer for \"43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2\"" Jan 16 18:03:04.063395 systemd[1]: Started cri-containerd-63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796.scope - libcontainer container 63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796. 
Jan 16 18:03:04.068596 containerd[1593]: time="2026-01-16T18:03:04.068436476Z" level=info msg="connecting to shim 43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2" address="unix:///run/containerd/s/1c6bcf964c3e6a14cadfc9e16e4bb402cb7af0c550d84e0d2cc52e044330cf5e" protocol=ttrpc version=3 Jan 16 18:03:04.089682 systemd[1]: Started cri-containerd-64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba.scope - libcontainer container 64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba. Jan 16 18:03:04.098381 systemd[1]: Started cri-containerd-43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2.scope - libcontainer container 43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2. Jan 16 18:03:04.100000 audit: BPF prog-id=96 op=LOAD Jan 16 18:03:04.102000 audit: BPF prog-id=97 op=LOAD Jan 16 18:03:04.102000 audit[2649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.102000 audit: BPF prog-id=97 op=UNLOAD Jan 16 18:03:04.102000 audit[2649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.102000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.102000 audit: BPF prog-id=98 op=LOAD Jan 16 18:03:04.102000 audit[2649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.102000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.103000 audit: BPF prog-id=99 op=LOAD Jan 16 18:03:04.103000 audit[2649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.103000 audit: BPF prog-id=99 op=UNLOAD Jan 16 18:03:04.103000 audit[2649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) 
Jan 16 18:03:04.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.103000 audit: BPF prog-id=98 op=UNLOAD Jan 16 18:03:04.103000 audit[2649]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.103000 audit: BPF prog-id=100 op=LOAD Jan 16 18:03:04.103000 audit[2649]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2510 pid=2649 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.103000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3633613936376166646264306534616666393461663533613335373638 Jan 16 18:03:04.112000 audit: BPF prog-id=101 op=LOAD Jan 16 18:03:04.113000 audit: BPF prog-id=102 op=LOAD Jan 16 18:03:04.113000 audit[2660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.113000 audit: BPF prog-id=102 op=UNLOAD Jan 16 18:03:04.113000 audit[2660]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.113000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.114000 audit: BPF prog-id=103 op=LOAD Jan 16 18:03:04.114000 audit[2660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.114000 audit: BPF prog-id=104 op=LOAD Jan 16 18:03:04.114000 audit[2660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.114000 audit: BPF prog-id=104 op=UNLOAD Jan 16 18:03:04.114000 audit[2660]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.114000 audit: BPF prog-id=103 op=UNLOAD Jan 16 18:03:04.114000 audit[2660]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.114000 audit: BPF prog-id=105 op=LOAD Jan 16 18:03:04.114000 audit[2660]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2530 
pid=2660 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.114000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3634393835656538316239393036313239356564323239393431626136 Jan 16 18:03:04.119447 kubelet[2465]: I0116 18:03:04.119416 2465 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.119780 kubelet[2465]: E0116 18:03:04.119754 2465 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://188.245.199.112:6443/api/v1/nodes\": dial tcp 188.245.199.112:6443: connect: connection refused" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.138000 audit: BPF prog-id=106 op=LOAD Jan 16 18:03:04.141000 audit: BPF prog-id=107 op=LOAD Jan 16 18:03:04.141000 audit[2672]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.141000 audit: BPF prog-id=107 op=UNLOAD Jan 16 18:03:04.141000 audit[2672]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.143000 audit: BPF prog-id=108 op=LOAD Jan 16 18:03:04.143000 audit[2672]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.143000 audit: BPF prog-id=109 op=LOAD Jan 16 18:03:04.143000 audit[2672]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.143000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.144000 audit: BPF prog-id=109 op=UNLOAD Jan 16 18:03:04.144000 audit[2672]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.144000 audit: BPF prog-id=108 op=UNLOAD Jan 16 18:03:04.144000 audit[2672]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.144000 audit: BPF prog-id=110 op=LOAD Jan 16 18:03:04.144000 audit[2672]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2572 pid=2672 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:04.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433626561346565386535626437363732386164346632343733336563 Jan 16 18:03:04.166520 containerd[1593]: time="2026-01-16T18:03:04.166468814Z" level=info msg="StartContainer for \"63a967afdbd0e4aff94af53a3576800687429bd880b29d48302a5ff0dceda796\" returns successfully" Jan 16 18:03:04.184342 
containerd[1593]: time="2026-01-16T18:03:04.184111590Z" level=info msg="StartContainer for \"64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba\" returns successfully" Jan 16 18:03:04.204258 containerd[1593]: time="2026-01-16T18:03:04.204223944Z" level=info msg="StartContainer for \"43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2\" returns successfully" Jan 16 18:03:04.387303 kubelet[2465]: E0116 18:03:04.386758 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.395908 kubelet[2465]: E0116 18:03:04.395871 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.396817 kubelet[2465]: E0116 18:03:04.396789 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.923254 kubelet[2465]: I0116 18:03:04.923205 2465 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:04.990069 systemd[1]: Started sshd@8-188.245.199.112:22-210.79.142.221:39814.service - OpenSSH per-connection server daemon (210.79.142.221:39814). Jan 16 18:03:04.992784 kernel: kauditd_printk_skb: 204 callbacks suppressed Jan 16 18:03:04.992812 kernel: audit: type=1130 audit(1768586584.989:381): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.199.112:22-210.79.142.221:39814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:04.989000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.199.112:22-210.79.142.221:39814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:05.401110 kubelet[2465]: E0116 18:03:05.400834 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:05.401651 kubelet[2465]: E0116 18:03:05.401342 2465 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.038159 sshd[2745]: Invalid user myuser from 210.79.142.221 port 39814 Jan 16 18:03:06.230751 sshd[2745]: Received disconnect from 210.79.142.221 port 39814:11: Bye Bye [preauth] Jan 16 18:03:06.230904 sshd[2745]: Disconnected from invalid user myuser 210.79.142.221 port 39814 [preauth] Jan 16 18:03:06.230000 audit[2745]: USER_ERR pid=2745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=210.79.142.221 addr=210.79.142.221 terminal=ssh res=failed' Jan 16 18:03:06.235978 kernel: audit: type=1109 audit(1768586586.230:382): pid=2745 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=210.79.142.221 addr=210.79.142.221 terminal=ssh res=failed' Jan 16 18:03:06.236441 systemd[1]: sshd@8-188.245.199.112:22-210.79.142.221:39814.service: Deactivated successfully. 
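The kauditd lines here print numeric record types (`type=1109`, `type=1130`, `type=1131`, `type=1334`). As a reading aid, a small lookup of the values that appear in this log, with names taken from the kernel's `linux/audit.h`:

```python
# Audit record types seen in this log (values from linux/audit.h).
AUDIT_TYPES = {
    1109: "USER_ERR",       # PAM failure, e.g. the sshd bad_ident record
    1130: "SERVICE_START",  # systemd unit started
    1131: "SERVICE_STOP",   # systemd unit stopped
    1334: "BPF",            # BPF program load/unload (prog-id=N op=LOAD/UNLOAD)
}

def audit_type_name(n: int) -> str:
    """Map a numeric audit record type to its symbolic name."""
    return AUDIT_TYPES.get(n, f"UNKNOWN({n})")
```

On a live system, `ausearch -i` performs the same interpretation (including proctitle decoding) directly from the audit log.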
Jan 16 18:03:06.237000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.199.112:22-210.79.142.221:39814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:06.242966 kernel: audit: type=1131 audit(1768586586.237:383): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-188.245.199.112:22-210.79.142.221:39814 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:06.420368 kubelet[2465]: E0116 18:03:06.420022 2465 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4580-0-0-p-f44e0c3b96\" not found" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.444985 kubelet[2465]: E0116 18:03:06.443174 2465 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4580-0-0-p-f44e0c3b96.188b48208de297fe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580-0-0-p-f44e0c3b96,UID:ci-4580-0-0-p-f44e0c3b96,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-f44e0c3b96,},FirstTimestamp:2026-01-16 18:03:03.329642494 +0000 UTC m=+1.931372699,LastTimestamp:2026-01-16 18:03:03.329642494 +0000 UTC m=+1.931372699,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-f44e0c3b96,}" Jan 16 18:03:06.470761 kubelet[2465]: I0116 18:03:06.470445 2465 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.545809 kubelet[2465]: I0116 18:03:06.545773 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 
18:03:06.558615 kubelet[2465]: E0116 18:03:06.558570 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.558615 kubelet[2465]: I0116 18:03:06.558606 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.564168 kubelet[2465]: E0116 18:03:06.564116 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.564168 kubelet[2465]: I0116 18:03:06.564159 2465 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:06.569686 kubelet[2465]: E0116 18:03:06.569638 2465 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580-0-0-p-f44e0c3b96\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:07.322080 kubelet[2465]: I0116 18:03:07.322037 2465 apiserver.go:52] "Watching apiserver" Jan 16 18:03:07.343044 kubelet[2465]: I0116 18:03:07.342915 2465 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:03:08.354999 systemd[1]: Reload requested from client PID 2754 ('systemctl') (unit session-8.scope)... Jan 16 18:03:08.355017 systemd[1]: Reloading... Jan 16 18:03:08.483068 zram_generator::config[2810]: No configuration found. Jan 16 18:03:08.694819 systemd[1]: Reloading finished in 339 ms. Jan 16 18:03:08.730175 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 18:03:08.747758 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 18:03:08.749338 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:03:08.748000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:08.749443 systemd[1]: kubelet.service: Consumed 2.393s CPU time, 125M memory peak. Jan 16 18:03:08.753059 kernel: audit: type=1131 audit(1768586588.748:384): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:08.756966 kernel: audit: type=1334 audit(1768586588.755:385): prog-id=111 op=LOAD Jan 16 18:03:08.757056 kernel: audit: type=1334 audit(1768586588.755:386): prog-id=64 op=UNLOAD Jan 16 18:03:08.755000 audit: BPF prog-id=111 op=LOAD Jan 16 18:03:08.755000 audit: BPF prog-id=64 op=UNLOAD Jan 16 18:03:08.755088 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 18:03:08.759458 kernel: audit: type=1334 audit(1768586588.756:387): prog-id=112 op=LOAD Jan 16 18:03:08.759540 kernel: audit: type=1334 audit(1768586588.756:388): prog-id=65 op=UNLOAD Jan 16 18:03:08.759558 kernel: audit: type=1334 audit(1768586588.756:389): prog-id=113 op=LOAD Jan 16 18:03:08.756000 audit: BPF prog-id=112 op=LOAD Jan 16 18:03:08.756000 audit: BPF prog-id=65 op=UNLOAD Jan 16 18:03:08.756000 audit: BPF prog-id=113 op=LOAD Jan 16 18:03:08.756000 audit: BPF prog-id=114 op=LOAD Jan 16 18:03:08.756000 audit: BPF prog-id=66 op=UNLOAD Jan 16 18:03:08.756000 audit: BPF prog-id=67 op=UNLOAD Jan 16 18:03:08.757000 audit: BPF prog-id=115 op=LOAD Jan 16 18:03:08.757000 audit: BPF prog-id=116 op=LOAD Jan 16 18:03:08.758000 audit: BPF prog-id=68 op=UNLOAD Jan 16 18:03:08.758000 audit: BPF prog-id=69 op=UNLOAD Jan 16 18:03:08.758000 audit: BPF prog-id=117 op=LOAD Jan 16 18:03:08.758000 audit: BPF prog-id=78 op=UNLOAD Jan 16 18:03:08.759000 audit: BPF prog-id=118 op=LOAD Jan 16 18:03:08.759000 audit: BPF prog-id=119 op=LOAD Jan 16 18:03:08.759000 audit: BPF prog-id=79 op=UNLOAD Jan 16 18:03:08.759000 audit: BPF prog-id=80 op=UNLOAD Jan 16 18:03:08.759000 audit: BPF prog-id=120 op=LOAD Jan 16 18:03:08.759000 audit: BPF prog-id=71 op=UNLOAD Jan 16 18:03:08.761967 kernel: audit: type=1334 audit(1768586588.756:390): prog-id=114 op=LOAD Jan 16 18:03:08.759000 audit: BPF prog-id=121 op=LOAD Jan 16 18:03:08.760000 audit: BPF prog-id=122 op=LOAD Jan 16 18:03:08.760000 audit: BPF prog-id=72 op=UNLOAD Jan 16 18:03:08.760000 audit: BPF prog-id=73 op=UNLOAD Jan 16 18:03:08.761000 audit: BPF prog-id=123 op=LOAD Jan 16 18:03:08.761000 audit: BPF prog-id=75 op=UNLOAD Jan 16 18:03:08.761000 audit: BPF prog-id=124 op=LOAD Jan 16 18:03:08.761000 audit: BPF prog-id=125 op=LOAD Jan 16 18:03:08.761000 audit: BPF prog-id=76 op=UNLOAD Jan 16 18:03:08.761000 audit: BPF prog-id=77 op=UNLOAD Jan 16 18:03:08.762000 audit: BPF prog-id=126 op=LOAD Jan 16 18:03:08.762000 audit: BPF 
prog-id=74 op=UNLOAD Jan 16 18:03:08.762000 audit: BPF prog-id=127 op=LOAD Jan 16 18:03:08.762000 audit: BPF prog-id=61 op=UNLOAD Jan 16 18:03:08.763000 audit: BPF prog-id=128 op=LOAD Jan 16 18:03:08.763000 audit: BPF prog-id=129 op=LOAD Jan 16 18:03:08.763000 audit: BPF prog-id=62 op=UNLOAD Jan 16 18:03:08.763000 audit: BPF prog-id=63 op=UNLOAD Jan 16 18:03:08.764000 audit: BPF prog-id=130 op=LOAD Jan 16 18:03:08.764000 audit: BPF prog-id=70 op=UNLOAD Jan 16 18:03:08.948091 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:03:08.946000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:08.961509 (kubelet)[2847]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:03:09.009306 kubelet[2847]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:03:09.009648 kubelet[2847]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:03:09.009687 kubelet[2847]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 18:03:09.009803 kubelet[2847]: I0116 18:03:09.009776 2847 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:03:09.023774 kubelet[2847]: I0116 18:03:09.023732 2847 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 16 18:03:09.024148 kubelet[2847]: I0116 18:03:09.023856 2847 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:03:09.024423 kubelet[2847]: I0116 18:03:09.024410 2847 server.go:956] "Client rotation is on, will bootstrap in background" Jan 16 18:03:09.026400 kubelet[2847]: I0116 18:03:09.026333 2847 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 16 18:03:09.037265 kubelet[2847]: I0116 18:03:09.037224 2847 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:03:09.046017 kubelet[2847]: I0116 18:03:09.045310 2847 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:03:09.051928 kubelet[2847]: I0116 18:03:09.051903 2847 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 18:03:09.052372 kubelet[2847]: I0116 18:03:09.052346 2847 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:03:09.052601 kubelet[2847]: I0116 18:03:09.052434 2847 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580-0-0-p-f44e0c3b96","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:03:09.052817 kubelet[2847]: I0116 18:03:09.052734 2847 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 
18:03:09.052817 kubelet[2847]: I0116 18:03:09.052746 2847 container_manager_linux.go:303] "Creating device plugin manager" Jan 16 18:03:09.052933 kubelet[2847]: I0116 18:03:09.052923 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:03:09.053264 kubelet[2847]: I0116 18:03:09.053216 2847 kubelet.go:480] "Attempting to sync node with API server" Jan 16 18:03:09.053264 kubelet[2847]: I0116 18:03:09.053232 2847 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:03:09.053830 kubelet[2847]: I0116 18:03:09.053751 2847 kubelet.go:386] "Adding apiserver pod source" Jan 16 18:03:09.053830 kubelet[2847]: I0116 18:03:09.053778 2847 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:03:09.056967 kubelet[2847]: I0116 18:03:09.056453 2847 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:03:09.057201 kubelet[2847]: I0116 18:03:09.057187 2847 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 16 18:03:09.061486 kubelet[2847]: I0116 18:03:09.061470 2847 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:03:09.061627 kubelet[2847]: I0116 18:03:09.061616 2847 server.go:1289] "Started kubelet" Jan 16 18:03:09.067100 kubelet[2847]: I0116 18:03:09.067081 2847 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:03:09.077035 kubelet[2847]: I0116 18:03:09.077000 2847 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 18:03:09.077841 kubelet[2847]: I0116 18:03:09.077795 2847 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:03:09.078286 kubelet[2847]: E0116 18:03:09.078268 2847 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580-0-0-p-f44e0c3b96\" not found" Jan 16 18:03:09.078688 kubelet[2847]: I0116 
18:03:09.078675 2847 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:03:09.078933 kubelet[2847]: I0116 18:03:09.078914 2847 reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:03:09.090972 kubelet[2847]: I0116 18:03:09.079082 2847 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:03:09.090972 kubelet[2847]: I0116 18:03:09.090269 2847 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:03:09.090972 kubelet[2847]: I0116 18:03:09.087788 2847 server.go:317] "Adding debug handlers to kubelet server" Jan 16 18:03:09.094713 kubelet[2847]: I0116 18:03:09.094665 2847 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 16 18:03:09.096045 kubelet[2847]: I0116 18:03:09.096020 2847 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 16 18:03:09.096187 kubelet[2847]: I0116 18:03:09.096177 2847 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 16 18:03:09.096268 kubelet[2847]: I0116 18:03:09.096259 2847 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 18:03:09.096332 kubelet[2847]: I0116 18:03:09.096323 2847 kubelet.go:2436] "Starting kubelet main sync loop" Jan 16 18:03:09.096429 kubelet[2847]: E0116 18:03:09.096409 2847 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:03:09.097906 kubelet[2847]: I0116 18:03:09.097837 2847 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:03:09.110163 kubelet[2847]: I0116 18:03:09.108741 2847 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:03:09.114844 kubelet[2847]: E0116 18:03:09.114810 2847 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:03:09.117847 kubelet[2847]: I0116 18:03:09.117826 2847 factory.go:223] Registration of the containerd container factory successfully Jan 16 18:03:09.118031 kubelet[2847]: I0116 18:03:09.118020 2847 factory.go:223] Registration of the systemd container factory successfully Jan 16 18:03:09.177452 kubelet[2847]: I0116 18:03:09.177426 2847 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:03:09.177763 kubelet[2847]: I0116 18:03:09.177746 2847 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:03:09.177858 kubelet[2847]: I0116 18:03:09.177848 2847 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:03:09.178101 kubelet[2847]: I0116 18:03:09.178081 2847 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 18:03:09.178189 kubelet[2847]: I0116 18:03:09.178167 2847 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 18:03:09.178241 kubelet[2847]: I0116 18:03:09.178234 2847 
policy_none.go:49] "None policy: Start" Jan 16 18:03:09.178295 kubelet[2847]: I0116 18:03:09.178287 2847 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:03:09.178350 kubelet[2847]: I0116 18:03:09.178343 2847 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:03:09.178505 kubelet[2847]: I0116 18:03:09.178495 2847 state_mem.go:75] "Updated machine memory state" Jan 16 18:03:09.185346 kubelet[2847]: E0116 18:03:09.185313 2847 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 16 18:03:09.185678 kubelet[2847]: I0116 18:03:09.185659 2847 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:03:09.186364 kubelet[2847]: I0116 18:03:09.186319 2847 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:03:09.186657 kubelet[2847]: I0116 18:03:09.186616 2847 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:03:09.189992 kubelet[2847]: E0116 18:03:09.189516 2847 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 18:03:09.197608 kubelet[2847]: I0116 18:03:09.197547 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.198389 kubelet[2847]: I0116 18:03:09.198249 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.200316 kubelet[2847]: I0116 18:03:09.199052 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.290535 kubelet[2847]: I0116 18:03:09.290484 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-kubeconfig\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291005 kubelet[2847]: I0116 18:03:09.290687 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291005 kubelet[2847]: I0116 18:03:09.290740 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/19540ff300aad99c245720a4185f1618-kubeconfig\") pod \"kube-scheduler-ci-4580-0-0-p-f44e0c3b96\" (UID: \"19540ff300aad99c245720a4185f1618\") " pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291005 kubelet[2847]: I0116 18:03:09.290775 2847 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-ca-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291005 kubelet[2847]: I0116 18:03:09.290807 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-flexvolume-dir\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291414 kubelet[2847]: I0116 18:03:09.291227 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-ca-certs\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291414 kubelet[2847]: I0116 18:03:09.291294 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-k8s-certs\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291414 kubelet[2847]: I0116 18:03:09.291331 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/2f4b6f62641848800631bb2e9a690e77-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580-0-0-p-f44e0c3b96\" (UID: \"2f4b6f62641848800631bb2e9a690e77\") " 
pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.291414 kubelet[2847]: I0116 18:03:09.291361 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/7395277834893701c6f0c9db351c0843-k8s-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" (UID: \"7395277834893701c6f0c9db351c0843\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.297773 kubelet[2847]: I0116 18:03:09.297735 2847 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.311369 kubelet[2847]: I0116 18:03:09.311330 2847 kubelet_node_status.go:124] "Node was previously registered" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:09.311534 kubelet[2847]: I0116 18:03:09.311437 2847 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:10.063485 kubelet[2847]: I0116 18:03:10.063453 2847 apiserver.go:52] "Watching apiserver" Jan 16 18:03:10.079836 kubelet[2847]: I0116 18:03:10.079771 2847 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:03:10.158401 kubelet[2847]: I0116 18:03:10.158346 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:10.159176 kubelet[2847]: I0116 18:03:10.159139 2847 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:10.176543 kubelet[2847]: E0116 18:03:10.175994 2847 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580-0-0-p-f44e0c3b96\" already exists" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:10.176543 kubelet[2847]: E0116 18:03:10.176420 2847 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580-0-0-p-f44e0c3b96\" 
already exists" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:10.211549 kubelet[2847]: I0116 18:03:10.211390 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-f44e0c3b96" podStartSLOduration=1.211370374 podStartE2EDuration="1.211370374s" podCreationTimestamp="2026-01-16 18:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:03:10.191992305 +0000 UTC m=+1.223912636" watchObservedRunningTime="2026-01-16 18:03:10.211370374 +0000 UTC m=+1.243290705" Jan 16 18:03:10.227559 kubelet[2847]: I0116 18:03:10.227450 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4580-0-0-p-f44e0c3b96" podStartSLOduration=1.227430973 podStartE2EDuration="1.227430973s" podCreationTimestamp="2026-01-16 18:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:03:10.212230618 +0000 UTC m=+1.244150989" watchObservedRunningTime="2026-01-16 18:03:10.227430973 +0000 UTC m=+1.259351344" Jan 16 18:03:10.243225 kubelet[2847]: I0116 18:03:10.243164 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4580-0-0-p-f44e0c3b96" podStartSLOduration=1.2431401499999999 podStartE2EDuration="1.24314015s" podCreationTimestamp="2026-01-16 18:03:09 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:03:10.228244552 +0000 UTC m=+1.260164883" watchObservedRunningTime="2026-01-16 18:03:10.24314015 +0000 UTC m=+1.275060481" Jan 16 18:03:13.618576 kubelet[2847]: I0116 18:03:13.618512 2847 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 
18:03:13.620933 kubelet[2847]: I0116 18:03:13.619209 2847 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 18:03:13.621024 containerd[1593]: time="2026-01-16T18:03:13.618912200Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 18:03:14.222233 systemd[1]: Created slice kubepods-besteffort-pod2173a052_337d_4505_83a1_40f5453d41b9.slice - libcontainer container kubepods-besteffort-pod2173a052_337d_4505_83a1_40f5453d41b9.slice. Jan 16 18:03:14.327335 kubelet[2847]: I0116 18:03:14.327135 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/2173a052-337d-4505-83a1-40f5453d41b9-lib-modules\") pod \"kube-proxy-wv2fz\" (UID: \"2173a052-337d-4505-83a1-40f5453d41b9\") " pod="kube-system/kube-proxy-wv2fz" Jan 16 18:03:14.327335 kubelet[2847]: I0116 18:03:14.327219 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/2173a052-337d-4505-83a1-40f5453d41b9-kube-proxy\") pod \"kube-proxy-wv2fz\" (UID: \"2173a052-337d-4505-83a1-40f5453d41b9\") " pod="kube-system/kube-proxy-wv2fz" Jan 16 18:03:14.327335 kubelet[2847]: I0116 18:03:14.327243 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/2173a052-337d-4505-83a1-40f5453d41b9-xtables-lock\") pod \"kube-proxy-wv2fz\" (UID: \"2173a052-337d-4505-83a1-40f5453d41b9\") " pod="kube-system/kube-proxy-wv2fz" Jan 16 18:03:14.327335 kubelet[2847]: I0116 18:03:14.327262 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pcflf\" (UniqueName: \"kubernetes.io/projected/2173a052-337d-4505-83a1-40f5453d41b9-kube-api-access-pcflf\") pod \"kube-proxy-wv2fz\" (UID: 
\"2173a052-337d-4505-83a1-40f5453d41b9\") " pod="kube-system/kube-proxy-wv2fz" Jan 16 18:03:14.532735 containerd[1593]: time="2026-01-16T18:03:14.532430370Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wv2fz,Uid:2173a052-337d-4505-83a1-40f5453d41b9,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:14.558980 containerd[1593]: time="2026-01-16T18:03:14.557990501Z" level=info msg="connecting to shim 98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96" address="unix:///run/containerd/s/22e5da49bb2056c9c4ac19cc2d1f203f9a763c11a35368aaa0b2b7d7ac813752" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:14.590181 systemd[1]: Started cri-containerd-98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96.scope - libcontainer container 98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96. Jan 16 18:03:14.602000 audit: BPF prog-id=131 op=LOAD Jan 16 18:03:14.604164 kernel: kauditd_printk_skb: 35 callbacks suppressed Jan 16 18:03:14.604212 kernel: audit: type=1334 audit(1768586594.602:426): prog-id=131 op=LOAD Jan 16 18:03:14.603000 audit: BPF prog-id=132 op=LOAD Jan 16 18:03:14.605266 kernel: audit: type=1334 audit(1768586594.603:427): prog-id=132 op=LOAD Jan 16 18:03:14.605298 kernel: audit: type=1300 audit(1768586594.603:427): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.603000 audit[2916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.609642 kernel: audit: type=1327 audit(1768586594.603:427): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.609704 kernel: audit: type=1334 audit(1768586594.603:428): prog-id=132 op=UNLOAD Jan 16 18:03:14.603000 audit: BPF prog-id=132 op=UNLOAD Jan 16 18:03:14.612151 kernel: audit: type=1300 audit(1768586594.603:428): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.612197 kernel: audit: type=1327 audit(1768586594.603:428): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.603000 audit[2916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.603000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.615033 kernel: audit: type=1334 audit(1768586594.603:429): prog-id=133 op=LOAD Jan 16 18:03:14.615093 kernel: audit: type=1300 audit(1768586594.603:429): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.603000 audit: BPF prog-id=133 op=LOAD Jan 16 18:03:14.603000 audit[2916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.620155 kernel: audit: type=1327 audit(1768586594.603:429): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.603000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.606000 audit: BPF prog-id=134 op=LOAD Jan 16 18:03:14.606000 audit[2916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.606000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.611000 audit: BPF prog-id=134 op=UNLOAD Jan 16 18:03:14.611000 audit[2916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.611000 audit: BPF prog-id=133 op=UNLOAD Jan 16 18:03:14.611000 audit[2916]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.611000 audit: BPF prog-id=135 op=LOAD Jan 16 18:03:14.611000 audit[2916]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=2905 pid=2916 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.611000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3938643162636663383731646266333264613165306134633038346338 Jan 16 18:03:14.645491 containerd[1593]: time="2026-01-16T18:03:14.645418043Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-wv2fz,Uid:2173a052-337d-4505-83a1-40f5453d41b9,Namespace:kube-system,Attempt:0,} returns sandbox id \"98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96\"" Jan 16 18:03:14.653305 containerd[1593]: time="2026-01-16T18:03:14.653230982Z" level=info msg="CreateContainer within sandbox \"98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 18:03:14.668982 containerd[1593]: time="2026-01-16T18:03:14.667489537Z" level=info msg="Container f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:14.685498 containerd[1593]: time="2026-01-16T18:03:14.685361461Z" level=info msg="CreateContainer within sandbox \"98d1bcfc871dbf32da1e0a4c084c871968a90c71e381be3df341ea89855f7b96\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a\"" Jan 16 18:03:14.686429 containerd[1593]: time="2026-01-16T18:03:14.686374358Z" level=info msg="StartContainer for \"f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a\"" Jan 16 18:03:14.691152 containerd[1593]: time="2026-01-16T18:03:14.691068772Z" level=info msg="connecting to shim f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a" 
address="unix:///run/containerd/s/22e5da49bb2056c9c4ac19cc2d1f203f9a763c11a35368aaa0b2b7d7ac813752" protocol=ttrpc version=3 Jan 16 18:03:14.722205 systemd[1]: Started cri-containerd-f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a.scope - libcontainer container f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a. Jan 16 18:03:14.772000 audit: BPF prog-id=136 op=LOAD Jan 16 18:03:14.772000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2905 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637373661343461333339316561373361393339626436363837333664 Jan 16 18:03:14.772000 audit: BPF prog-id=137 op=LOAD Jan 16 18:03:14.772000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2905 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637373661343461333339316561373361393339626436363837333664 Jan 16 18:03:14.772000 audit: BPF prog-id=137 op=UNLOAD Jan 16 18:03:14.772000 audit[2940]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.772000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637373661343461333339316561373361393339626436363837333664 Jan 16 18:03:14.773000 audit: BPF prog-id=136 op=UNLOAD Jan 16 18:03:14.773000 audit[2940]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2905 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637373661343461333339316561373361393339626436363837333664 Jan 16 18:03:14.773000 audit: BPF prog-id=138 op=LOAD Jan 16 18:03:14.773000 audit[2940]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2905 pid=2940 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:14.773000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6637373661343461333339316561373361393339626436363837333664 Jan 16 18:03:14.826236 containerd[1593]: time="2026-01-16T18:03:14.826106130Z" level=info msg="StartContainer for \"f776a44a3391ea73a939bd668736dae8093a998babeaa72a5dafb70f7176459a\" returns successfully" Jan 16 18:03:14.857819 systemd[1]: 
Created slice kubepods-besteffort-podab315de5_f7a7_4d88_96ff_0c63dbbb7157.slice - libcontainer container kubepods-besteffort-podab315de5_f7a7_4d88_96ff_0c63dbbb7157.slice. Jan 16 18:03:14.931447 kubelet[2847]: I0116 18:03:14.931304 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2vqhs\" (UniqueName: \"kubernetes.io/projected/ab315de5-f7a7-4d88-96ff-0c63dbbb7157-kube-api-access-2vqhs\") pod \"tigera-operator-7dcd859c48-p52pb\" (UID: \"ab315de5-f7a7-4d88-96ff-0c63dbbb7157\") " pod="tigera-operator/tigera-operator-7dcd859c48-p52pb" Jan 16 18:03:14.931447 kubelet[2847]: I0116 18:03:14.931368 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/ab315de5-f7a7-4d88-96ff-0c63dbbb7157-var-lib-calico\") pod \"tigera-operator-7dcd859c48-p52pb\" (UID: \"ab315de5-f7a7-4d88-96ff-0c63dbbb7157\") " pod="tigera-operator/tigera-operator-7dcd859c48-p52pb" Jan 16 18:03:15.041000 audit[3005]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.041000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc05aff0 a2=0 a3=1 items=0 ppid=2953 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.041000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:03:15.044000 audit[3009]: NETFILTER_CFG table=nat:55 family=2 entries=1 op=nft_register_chain pid=3009 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.044000 audit[3009]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff37c3840 a2=0 a3=1 items=0 ppid=2953 
pid=3009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.044000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:03:15.044000 audit[3006]: NETFILTER_CFG table=mangle:56 family=10 entries=1 op=nft_register_chain pid=3006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.044000 audit[3006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffa7db120 a2=0 a3=1 items=0 ppid=2953 pid=3006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.044000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:03:15.052000 audit[3011]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.052000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff581cde0 a2=0 a3=1 items=0 ppid=2953 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.052000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:03:15.055000 audit[3013]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.055000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd14a5900 a2=0 a3=1 items=0 
ppid=2953 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.055000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:03:15.058000 audit[3018]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3018 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.058000 audit[3018]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffebb89250 a2=0 a3=1 items=0 ppid=2953 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.058000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:03:15.152000 audit[3019]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.152000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=fffffd63a160 a2=0 a3=1 items=0 ppid=2953 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.152000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:03:15.156000 audit[3021]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.156000 audit[3021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 
a1=fffffadefff0 a2=0 a3=1 items=0 ppid=2953 pid=3021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.156000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 18:03:15.162385 containerd[1593]: time="2026-01-16T18:03:15.162238644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p52pb,Uid:ab315de5-f7a7-4d88-96ff-0c63dbbb7157,Namespace:tigera-operator,Attempt:0,}" Jan 16 18:03:15.161000 audit[3024]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3024 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.161000 audit[3024]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffdec5fc50 a2=0 a3=1 items=0 ppid=2953 pid=3024 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.161000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 18:03:15.164000 audit[3025]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.164000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc1baf1f0 a2=0 a3=1 items=0 ppid=2953 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 
sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.164000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:03:15.173000 audit[3027]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3027 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.173000 audit[3027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd29560f0 a2=0 a3=1 items=0 ppid=2953 pid=3027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.173000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:03:15.175000 audit[3028]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.175000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdf446550 a2=0 a3=1 items=0 ppid=2953 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.175000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:03:15.192000 audit[3031]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.192000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes 
exit=744 a0=3 a1=ffffdc173420 a2=0 a3=1 items=0 ppid=2953 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.192000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:03:15.200846 containerd[1593]: time="2026-01-16T18:03:15.200729458Z" level=info msg="connecting to shim a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b" address="unix:///run/containerd/s/db9174735d8d8f33c2a87d093df4979ac72d0ab4f41557706c5c59e97d875bc2" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:15.206000 audit[3052]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.206000 audit[3052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe7799180 a2=0 a3=1 items=0 ppid=2953 pid=3052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.206000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 18:03:15.208000 audit[3053]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.208000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc4dafaf0 a2=0 a3=1 
items=0 ppid=2953 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.208000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 18:03:15.214000 audit[3062]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.214000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffdeff8e80 a2=0 a3=1 items=0 ppid=2953 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.214000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:03:15.216000 audit[3063]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.216000 audit[3063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcec315a0 a2=0 a3=1 items=0 ppid=2953 pid=3063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.216000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:03:15.220000 audit[3070]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3070 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 
18:03:15.220000 audit[3070]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffebc35190 a2=0 a3=1 items=0 ppid=2953 pid=3070 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.220000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:03:15.226000 audit[3073]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3073 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.226000 audit[3073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb90e300 a2=0 a3=1 items=0 ppid=2953 pid=3073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.226000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:03:15.231208 systemd[1]: Started cri-containerd-a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b.scope - libcontainer container a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b. 
Jan 16 18:03:15.238000 audit[3076]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3076 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.238000 audit[3076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff252ecd0 a2=0 a3=1 items=0 ppid=2953 pid=3076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.238000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:03:15.241000 audit[3079]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3079 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.241000 audit[3079]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff1b49010 a2=0 a3=1 items=0 ppid=2953 pid=3079 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.241000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:03:15.246000 audit[3086]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3086 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.246000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffcd144140 a2=0 a3=1 items=0 ppid=2953 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.246000 
audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:03:15.250000 audit: BPF prog-id=139 op=LOAD Jan 16 18:03:15.253000 audit: BPF prog-id=140 op=LOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.253000 audit: BPF prog-id=140 op=UNLOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.253000 audit: BPF prog-id=141 op=LOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.253000 audit: BPF prog-id=142 op=LOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.253000 audit: BPF prog-id=142 op=UNLOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.253000 audit: BPF prog-id=141 op=UNLOAD Jan 16 18:03:15.253000 audit[3055]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.253000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.254000 audit: BPF prog-id=143 op=LOAD Jan 16 18:03:15.254000 audit[3055]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3040 pid=3055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.254000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6133303431343532353938633336373937363266393233613931643833 Jan 16 18:03:15.257000 audit[3089]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.257000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd0c422f0 a2=0 a3=1 items=0 ppid=2953 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.257000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:03:15.260000 audit[3090]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3090 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.260000 audit[3090]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffbaedf80 a2=0 a3=1 items=0 ppid=2953 pid=3090 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.260000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:03:15.264000 audit[3092]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:03:15.264000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffc2179df0 a2=0 a3=1 items=0 ppid=2953 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.264000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:03:15.300831 containerd[1593]: time="2026-01-16T18:03:15.300787527Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-p52pb,Uid:ab315de5-f7a7-4d88-96ff-0c63dbbb7157,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b\"" Jan 16 18:03:15.300000 audit[3100]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:15.300000 audit[3100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd4cf2a90 a2=0 a3=1 items=0 ppid=2953 pid=3100 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.300000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:15.303898 containerd[1593]: time="2026-01-16T18:03:15.303746322Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 18:03:15.311000 audit[3100]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3100 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:15.311000 audit[3100]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffd4cf2a90 a2=0 a3=1 items=0 ppid=2953 pid=3100 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.311000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:15.314000 audit[3110]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3110 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.314000 audit[3110]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd7f325a0 a2=0 a3=1 items=0 ppid=2953 pid=3110 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.314000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:03:15.318000 audit[3112]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3112 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.318000 audit[3112]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd27cbcd0 a2=0 a3=1 items=0 ppid=2953 pid=3112 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.318000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 18:03:15.324000 audit[3115]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.324000 audit[3115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd4b6b950 a2=0 a3=1 items=0 ppid=2953 pid=3115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.324000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 18:03:15.326000 audit[3116]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3116 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.326000 audit[3116]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3978230 a2=0 a3=1 items=0 ppid=2953 pid=3116 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.326000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:03:15.330000 audit[3118]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3118 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.330000 audit[3118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff7ad9230 a2=0 a3=1 items=0 ppid=2953 pid=3118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.330000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:03:15.332000 audit[3119]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3119 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.332000 audit[3119]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffd067470 a2=0 a3=1 items=0 ppid=2953 pid=3119 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.332000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:03:15.336000 audit[3121]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3121 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.336000 audit[3121]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffdd826870 a2=0 a3=1 items=0 ppid=2953 pid=3121 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.336000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 18:03:15.341000 audit[3124]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3124 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.341000 audit[3124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=fffffea356e0 a2=0 a3=1 items=0 ppid=2953 pid=3124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.341000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:03:15.342000 audit[3125]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3125 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.342000 audit[3125]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd5462430 a2=0 a3=1 items=0 ppid=2953 pid=3125 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.342000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 
18:03:15.346000 audit[3127]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3127 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.346000 audit[3127]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd9bad490 a2=0 a3=1 items=0 ppid=2953 pid=3127 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.346000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:03:15.348000 audit[3128]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.348000 audit[3128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc286fa90 a2=0 a3=1 items=0 ppid=2953 pid=3128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.348000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:03:15.353000 audit[3130]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.353000 audit[3130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff69d53f0 a2=0 a3=1 items=0 ppid=2953 pid=3130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.353000 audit: 
PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:03:15.358000 audit[3133]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3133 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.358000 audit[3133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff97b2210 a2=0 a3=1 items=0 ppid=2953 pid=3133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:03:15.363000 audit[3136]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3136 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.363000 audit[3136]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffde7c1870 a2=0 a3=1 items=0 ppid=2953 pid=3136 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.363000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 18:03:15.365000 audit[3137]: NETFILTER_CFG table=nat:95 family=10 entries=1 
op=nft_register_chain pid=3137 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.365000 audit[3137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc4d838c0 a2=0 a3=1 items=0 ppid=2953 pid=3137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.365000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:03:15.370000 audit[3139]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3139 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.370000 audit[3139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffd0497c80 a2=0 a3=1 items=0 ppid=2953 pid=3139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.370000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:03:15.375000 audit[3142]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3142 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.375000 audit[3142]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff08815e0 a2=0 a3=1 items=0 ppid=2953 pid=3142 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.375000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:03:15.377000 audit[3143]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3143 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.377000 audit[3143]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffde41b450 a2=0 a3=1 items=0 ppid=2953 pid=3143 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.377000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:03:15.382000 audit[3145]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3145 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.382000 audit[3145]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffd6eacb70 a2=0 a3=1 items=0 ppid=2953 pid=3145 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.382000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:03:15.384000 audit[3146]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3146 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.384000 audit[3146]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe9511260 a2=0 a3=1 items=0 ppid=2953 
pid=3146 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.384000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:03:15.389000 audit[3148]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3148 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.389000 audit[3148]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb6e06d0 a2=0 a3=1 items=0 ppid=2953 pid=3148 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.389000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:03:15.397000 audit[3151]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3151 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:03:15.397000 audit[3151]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffff5b9cb40 a2=0 a3=1 items=0 ppid=2953 pid=3151 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.397000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:03:15.401000 audit[3153]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:03:15.401000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffeae7adc0 a2=0 a3=1 items=0 ppid=2953 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.401000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:15.402000 audit[3153]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3153 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:03:15.402000 audit[3153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffeae7adc0 a2=0 a3=1 items=0 ppid=2953 pid=3153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:15.402000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:16.052276 kubelet[2847]: I0116 18:03:16.051499 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-wv2fz" podStartSLOduration=2.051470107 podStartE2EDuration="2.051470107s" podCreationTimestamp="2026-01-16 18:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:03:15.201459222 +0000 UTC m=+6.233379553" watchObservedRunningTime="2026-01-16 18:03:16.051470107 +0000 UTC m=+7.083390438" Jan 16 18:03:17.208443 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1977231945.mount: Deactivated successfully. 
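The `proctitle=` fields in the audit records above are the invoking command line, hex-encoded with NUL bytes separating the argv entries. A minimal decoder (a sketch in plain Python; the sample string is copied verbatim from the `ip6tables-restore` records above):

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE field: hex-encoded, NUL-separated argv."""
    raw = bytes.fromhex(hex_str)
    # argv entries are separated by NUL bytes; strip any trailing NULs first.
    return [arg.decode("utf-8", errors="replace")
            for arg in raw.rstrip(b"\x00").split(b"\x00")]

# Sample copied from the ip6tables-restore records above.
sample = ("6970367461626C65732D726573746F7265002D770035002D5700"
          "313030303030002D2D6E6F666C757368002D2D636F756E74657273")
print(decode_proctitle(sample))
# → ['ip6tables-restore', '-w', '5', '-W', '100000', '--noflush', '--counters']
```

Decoded, the records show kube-proxy driving `xtables-nft-multi` with the xtables lock options `-w 5 -W 100000` (wait up to 5 seconds for the lock, retrying every 100000 microseconds).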
Jan 16 18:03:17.900352 containerd[1593]: time="2026-01-16T18:03:17.900224863Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:17.902353 containerd[1593]: time="2026-01-16T18:03:17.902260056Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 16 18:03:17.903426 containerd[1593]: time="2026-01-16T18:03:17.903059016Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:17.906925 containerd[1593]: time="2026-01-16T18:03:17.906887196Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:17.907824 containerd[1593]: time="2026-01-16T18:03:17.907380609Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.603551135s" Jan 16 18:03:17.907824 containerd[1593]: time="2026-01-16T18:03:17.907416742Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 18:03:17.913869 containerd[1593]: time="2026-01-16T18:03:17.913824586Z" level=info msg="CreateContainer within sandbox \"a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 18:03:17.929991 containerd[1593]: time="2026-01-16T18:03:17.928127995Z" level=info msg="Container 
47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:17.930376 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1426602449.mount: Deactivated successfully. Jan 16 18:03:17.942972 containerd[1593]: time="2026-01-16T18:03:17.942893085Z" level=info msg="CreateContainer within sandbox \"a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5\"" Jan 16 18:03:17.945493 containerd[1593]: time="2026-01-16T18:03:17.945402444Z" level=info msg="StartContainer for \"47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5\"" Jan 16 18:03:17.948406 containerd[1593]: time="2026-01-16T18:03:17.947702129Z" level=info msg="connecting to shim 47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5" address="unix:///run/containerd/s/db9174735d8d8f33c2a87d093df4979ac72d0ab4f41557706c5c59e97d875bc2" protocol=ttrpc version=3 Jan 16 18:03:17.974197 systemd[1]: Started cri-containerd-47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5.scope - libcontainer container 47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5. 
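As soon as the container scope starts, runc loads several BPF programs (the `BPF prog-id=… op=LOAD` records that follow), and each is paired with a SYSCALL record. Reading those needs only a small lookup: `arch=c00000b7` is AUDIT_ARCH_AARCH64, and per the arm64 generic syscall table, 280 is `bpf`, 211 is `sendmsg` (how the netlink-based nftables configuration above is pushed), and 57 is `close`. A sketch, with the table deliberately limited to the numbers seen in this log:

```python
# Partial lookup for the syscall numbers appearing in these records;
# numbers are from the arm64 generic syscall table
# (arch=c00000b7 == AUDIT_ARCH_AARCH64).
AARCH64_SYSCALLS = {57: "close", 211: "sendmsg", 280: "bpf"}

def describe(arch: str, nr: int) -> str:
    """Render an audit arch/syscall pair with a symbolic name if known."""
    name = AARCH64_SYSCALLS.get(nr, "unknown") if arch == "c00000b7" else "unknown"
    return f"arch=0x{arch} syscall={nr} ({name})"

print(describe("c00000b7", 280))
# → arch=0xc00000b7 syscall=280 (bpf)
```

The matching `op=UNLOAD` records a moment later are runc dropping its references once the container's cgroup device filters are attached.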
Jan 16 18:03:17.991000 audit: BPF prog-id=144 op=LOAD Jan 16 18:03:17.992000 audit: BPF prog-id=145 op=LOAD Jan 16 18:03:17.992000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.992000 audit: BPF prog-id=145 op=UNLOAD Jan 16 18:03:17.992000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.992000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.993000 audit: BPF prog-id=146 op=LOAD Jan 16 18:03:17.993000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.993000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.993000 audit: BPF prog-id=147 op=LOAD Jan 16 18:03:17.993000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.993000 audit: BPF prog-id=147 op=UNLOAD Jan 16 18:03:17.993000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.993000 audit: BPF prog-id=146 op=UNLOAD Jan 16 18:03:17.993000 audit[3162]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:03:17.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:17.993000 audit: BPF prog-id=148 op=LOAD Jan 16 18:03:17.993000 audit[3162]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3040 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:17.993000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3437333133626563383031336131363131323638343732376363373231 Jan 16 18:03:18.018124 containerd[1593]: time="2026-01-16T18:03:18.018082124Z" level=info msg="StartContainer for \"47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5\" returns successfully" Jan 16 18:03:18.201465 kubelet[2847]: I0116 18:03:18.200976 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-p52pb" podStartSLOduration=1.5954875020000001 podStartE2EDuration="4.200937411s" podCreationTimestamp="2026-01-16 18:03:14 +0000 UTC" firstStartedPulling="2026-01-16 18:03:15.303109313 +0000 UTC m=+6.335029644" lastFinishedPulling="2026-01-16 18:03:17.908559222 +0000 UTC m=+8.940479553" observedRunningTime="2026-01-16 18:03:18.200796964 +0000 UTC m=+9.232717295" watchObservedRunningTime="2026-01-16 18:03:18.200937411 +0000 UTC m=+9.232857702" Jan 16 18:03:24.193640 sudo[1878]: pam_unix(sudo:session): session closed for user root Jan 16 18:03:24.193000 audit[1878]: USER_END pid=1878 uid=500 auid=500 ses=8 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:03:24.195323 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 16 18:03:24.195403 kernel: audit: type=1106 audit(1768586604.193:506): pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:03:24.193000 audit[1878]: CRED_DISP pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:03:24.198743 kernel: audit: type=1104 audit(1768586604.193:507): pid=1878 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:24.292533 sshd[1877]: Connection closed by 68.220.241.50 port 45960 Jan 16 18:03:24.291979 sshd-session[1873]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:24.292000 audit[1873]: USER_END pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:24.292000 audit[1873]: CRED_DISP pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:24.300070 kernel: audit: type=1106 audit(1768586604.292:508): pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:24.300156 kernel: audit: type=1104 audit(1768586604.292:509): pid=1873 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:24.300777 systemd[1]: sshd@7-188.245.199.112:22-68.220.241.50:45960.service: Deactivated successfully. Jan 16 18:03:24.300000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.199.112:22-68.220.241.50:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:24.304169 kernel: audit: type=1131 audit(1768586604.300:510): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-188.245.199.112:22-68.220.241.50:45960 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:24.307031 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 18:03:24.307330 systemd[1]: session-8.scope: Consumed 6.871s CPU time, 220.1M memory peak. Jan 16 18:03:24.309203 systemd-logind[1564]: Session 8 logged out. Waiting for processes to exit. Jan 16 18:03:24.312075 systemd-logind[1564]: Removed session 8. Jan 16 18:03:27.444000 audit[3244]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.444000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe56d6420 a2=0 a3=1 items=0 ppid=2953 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.449482 kernel: audit: type=1325 audit(1768586607.444:511): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.449604 kernel: audit: type=1300 audit(1768586607.444:511): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe56d6420 a2=0 a3=1 items=0 ppid=2953 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.444000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:27.451027 kernel: audit: type=1327 audit(1768586607.444:511): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:27.452000 audit[3244]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.455986 kernel: audit: type=1325 audit(1768586607.452:512): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.452000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe56d6420 a2=0 a3=1 items=0 ppid=2953 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.452000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:27.458968 kernel: audit: type=1300 audit(1768586607.452:512): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe56d6420 a2=0 a3=1 items=0 ppid=2953 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.472000 audit[3246]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.472000 audit[3246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc7c80e70 a2=0 a3=1 items=0 ppid=2953 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.472000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:27.479000 audit[3246]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3246 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:27.479000 audit[3246]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7c80e70 a2=0 a3=1 items=0 ppid=2953 pid=3246 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:27.479000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.775000 audit[3248]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.777051 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 18:03:32.777112 kernel: audit: type=1325 audit(1768586612.775:515): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.775000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2efddc0 a2=0 a3=1 items=0 ppid=2953 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.781296 kernel: audit: type=1300 audit(1768586612.775:515): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffe2efddc0 a2=0 a3=1 items=0 ppid=2953 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.775000 
audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.782978 kernel: audit: type=1327 audit(1768586612.775:515): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.784000 audit[3248]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.787981 kernel: audit: type=1325 audit(1768586612.784:516): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3248 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.788090 kernel: audit: type=1300 audit(1768586612.784:516): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2efddc0 a2=0 a3=1 items=0 ppid=2953 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.784000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2efddc0 a2=0 a3=1 items=0 ppid=2953 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.793259 kernel: audit: type=1327 audit(1768586612.784:516): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.805000 audit[3250]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.805000 audit[3250]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceb2e6c0 a2=0 a3=1 items=0 ppid=2953 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.810969 kernel: audit: type=1325 audit(1768586612.805:517): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.811093 kernel: audit: type=1300 audit(1768586612.805:517): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceb2e6c0 a2=0 a3=1 items=0 ppid=2953 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.805000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.813334 kernel: audit: type=1327 audit(1768586612.805:517): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.813000 audit[3250]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:32.813000 audit[3250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffceb2e6c0 a2=0 a3=1 items=0 ppid=2953 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:32.813000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:32.816983 kernel: audit: type=1325 audit(1768586612.813:518): 
table=nat:112 family=2 entries=12 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:33.865000 audit[3252]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:33.865000 audit[3252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffd947a8f0 a2=0 a3=1 items=0 ppid=2953 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:33.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:33.870000 audit[3252]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:33.870000 audit[3252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd947a8f0 a2=0 a3=1 items=0 ppid=2953 pid=3252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:33.870000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:36.718874 systemd[1]: Created slice kubepods-besteffort-podeabd4912_97c3_4b0b_9ed0_04eb7018c8a7.slice - libcontainer container kubepods-besteffort-podeabd4912_97c3_4b0b_9ed0_04eb7018c8a7.slice. 
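The kernel-echoed records above carry an `audit(EPOCH.MILLIS:SERIAL)` stamp, e.g. `audit(1768586604.193:506)`; the first field is a Unix timestamp that can be cross-checked against the journal's wall-clock prefix on the same record. A small converter (plain Python sketch):

```python
from datetime import datetime, timezone

def audit_timestamp(stamp: str) -> str:
    """Convert an audit 'epoch.millis:serial' stamp to UTC wall-clock time."""
    epoch_part, serial = stamp.split(":")
    ts = datetime.fromtimestamp(float(epoch_part), tz=timezone.utc)
    return f"{ts.isoformat()} (serial {serial})"

# Stamp copied from the kernel audit echo above; it matches the
# "Jan 16 18:03:24" journal prefix on the same record.
print(audit_timestamp("1768586604.193:506"))
# → 2026-01-16T18:03:24.193000+00:00 (serial 506)
```

The serial after the colon is the audit event ID, which is what ties a NETFILTER_CFG, SYSCALL, and PROCTITLE record together into one event.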
Jan 16 18:03:36.751000 audit[3254]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:36.751000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc8d15b90 a2=0 a3=1 items=0 ppid=2953 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:36.751000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:36.757000 audit[3254]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3254 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:36.757000 audit[3254]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc8d15b90 a2=0 a3=1 items=0 ppid=2953 pid=3254 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:36.757000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:36.775725 kubelet[2847]: I0116 18:03:36.775632 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/eabd4912-97c3-4b0b-9ed0-04eb7018c8a7-tigera-ca-bundle\") pod \"calico-typha-cf6c97d76-bvq9q\" (UID: \"eabd4912-97c3-4b0b-9ed0-04eb7018c8a7\") " pod="calico-system/calico-typha-cf6c97d76-bvq9q" Jan 16 18:03:36.775725 kubelet[2847]: I0116 18:03:36.775679 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: 
\"kubernetes.io/secret/eabd4912-97c3-4b0b-9ed0-04eb7018c8a7-typha-certs\") pod \"calico-typha-cf6c97d76-bvq9q\" (UID: \"eabd4912-97c3-4b0b-9ed0-04eb7018c8a7\") " pod="calico-system/calico-typha-cf6c97d76-bvq9q" Jan 16 18:03:36.775725 kubelet[2847]: I0116 18:03:36.775701 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccz9v\" (UniqueName: \"kubernetes.io/projected/eabd4912-97c3-4b0b-9ed0-04eb7018c8a7-kube-api-access-ccz9v\") pod \"calico-typha-cf6c97d76-bvq9q\" (UID: \"eabd4912-97c3-4b0b-9ed0-04eb7018c8a7\") " pod="calico-system/calico-typha-cf6c97d76-bvq9q" Jan 16 18:03:36.784000 audit[3256]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:36.784000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdce40860 a2=0 a3=1 items=0 ppid=2953 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:36.784000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:36.787000 audit[3256]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3256 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:36.787000 audit[3256]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdce40860 a2=0 a3=1 items=0 ppid=2953 pid=3256 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:36.787000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 
18:03:36.861547 systemd[1]: Created slice kubepods-besteffort-podaa02cb8d_8a25_4da4_be15_5878de50f5d9.slice - libcontainer container kubepods-besteffort-podaa02cb8d_8a25_4da4_be15_5878de50f5d9.slice. Jan 16 18:03:36.877092 kubelet[2847]: I0116 18:03:36.876676 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-cni-net-dir\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877092 kubelet[2847]: I0116 18:03:36.876738 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-lib-modules\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877092 kubelet[2847]: I0116 18:03:36.876769 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/aa02cb8d-8a25-4da4-be15-5878de50f5d9-node-certs\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877092 kubelet[2847]: I0116 18:03:36.876807 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-policysync\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877092 kubelet[2847]: I0116 18:03:36.876848 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-var-run-calico\") pod 
\"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877454 kubelet[2847]: I0116 18:03:36.876882 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-flexvol-driver-host\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.877454 kubelet[2847]: I0116 18:03:36.876913 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/aa02cb8d-8a25-4da4-be15-5878de50f5d9-tigera-ca-bundle\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.878293 kubelet[2847]: I0116 18:03:36.878241 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-xtables-lock\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.878694 kubelet[2847]: I0116 18:03:36.878657 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-var-lib-calico\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.879416 kubelet[2847]: I0116 18:03:36.879338 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-cni-bin-dir\") pod \"calico-node-6fm7v\" (UID: 
\"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.879818 kubelet[2847]: I0116 18:03:36.879392 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/aa02cb8d-8a25-4da4-be15-5878de50f5d9-cni-log-dir\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.879818 kubelet[2847]: I0116 18:03:36.879690 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dvtvj\" (UniqueName: \"kubernetes.io/projected/aa02cb8d-8a25-4da4-be15-5878de50f5d9-kube-api-access-dvtvj\") pod \"calico-node-6fm7v\" (UID: \"aa02cb8d-8a25-4da4-be15-5878de50f5d9\") " pod="calico-system/calico-node-6fm7v" Jan 16 18:03:36.995573 kubelet[2847]: E0116 18:03:36.994933 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:36.997267 kubelet[2847]: W0116 18:03:36.997022 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:36.997267 kubelet[2847]: E0116 18:03:36.997072 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:36.997510 kubelet[2847]: E0116 18:03:36.997494 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:36.997867 kubelet[2847]: W0116 18:03:36.997732 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:36.998190 kubelet[2847]: E0116 18:03:36.998093 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.014434 kubelet[2847]: E0116 18:03:37.014152 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.014914 kubelet[2847]: W0116 18:03:37.014886 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.016244 kubelet[2847]: E0116 18:03:37.016088 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.023749 containerd[1593]: time="2026-01-16T18:03:37.023539977Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf6c97d76-bvq9q,Uid:eabd4912-97c3-4b0b-9ed0-04eb7018c8a7,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:37.061068 containerd[1593]: time="2026-01-16T18:03:37.060105468Z" level=info msg="connecting to shim 4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d" address="unix:///run/containerd/s/f7cefbd23db02984ac5ab1ae2fee9fa56826beab350b88b1c2e17f6285ef399f" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:37.069424 kubelet[2847]: E0116 18:03:37.069379 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:03:37.099409 systemd[1]: Started cri-containerd-4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d.scope - libcontainer container 4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d. 
Jan 16 18:03:37.125000 audit: BPF prog-id=149 op=LOAD Jan 16 18:03:37.127000 audit: BPF prog-id=150 op=LOAD Jan 16 18:03:37.127000 audit[3289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.127000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.129000 audit: BPF prog-id=150 op=UNLOAD Jan 16 18:03:37.129000 audit[3289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.129000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.130000 audit: BPF prog-id=151 op=LOAD Jan 16 18:03:37.130000 audit[3289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.130000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.132000 audit: BPF prog-id=152 op=LOAD Jan 16 18:03:37.132000 audit[3289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.132000 audit: BPF prog-id=152 op=UNLOAD Jan 16 18:03:37.132000 audit[3289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.132000 audit: BPF prog-id=151 op=UNLOAD Jan 16 18:03:37.132000 audit[3289]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:03:37.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.132000 audit: BPF prog-id=153 op=LOAD Jan 16 18:03:37.132000 audit[3289]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3274 pid=3289 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.132000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3465353839383833393165616361376536656635636338653630306139 Jan 16 18:03:37.157871 kubelet[2847]: E0116 18:03:37.157818 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.157871 kubelet[2847]: W0116 18:03:37.157850 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.158093 kubelet[2847]: E0116 18:03:37.157882 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.158322 kubelet[2847]: E0116 18:03:37.158143 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.158322 kubelet[2847]: W0116 18:03:37.158160 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.158322 kubelet[2847]: E0116 18:03:37.158212 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.158417 kubelet[2847]: E0116 18:03:37.158364 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.158417 kubelet[2847]: W0116 18:03:37.158373 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.158417 kubelet[2847]: E0116 18:03:37.158382 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158526 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.159059 kubelet[2847]: W0116 18:03:37.158539 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158547 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158701 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.159059 kubelet[2847]: W0116 18:03:37.158708 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158716 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158870 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.159059 kubelet[2847]: W0116 18:03:37.158889 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.159059 kubelet[2847]: E0116 18:03:37.158898 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.160009 kubelet[2847]: E0116 18:03:37.159218 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.160075 kubelet[2847]: W0116 18:03:37.160012 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.160075 kubelet[2847]: E0116 18:03:37.160035 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.160467 kubelet[2847]: E0116 18:03:37.160431 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.160467 kubelet[2847]: W0116 18:03:37.160449 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.160467 kubelet[2847]: E0116 18:03:37.160461 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.160966 kubelet[2847]: E0116 18:03:37.160777 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.160966 kubelet[2847]: W0116 18:03:37.160794 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.160966 kubelet[2847]: E0116 18:03:37.160805 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.161123 kubelet[2847]: E0116 18:03:37.160994 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.161123 kubelet[2847]: W0116 18:03:37.161004 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.161123 kubelet[2847]: E0116 18:03:37.161012 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.161191 kubelet[2847]: E0116 18:03:37.161160 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.161191 kubelet[2847]: W0116 18:03:37.161167 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.161191 kubelet[2847]: E0116 18:03:37.161175 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.161563 kubelet[2847]: E0116 18:03:37.161317 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.161563 kubelet[2847]: W0116 18:03:37.161400 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.161563 kubelet[2847]: E0116 18:03:37.161413 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161581 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.162052 kubelet[2847]: W0116 18:03:37.161589 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161597 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161727 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.162052 kubelet[2847]: W0116 18:03:37.161734 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161740 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161865 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.162052 kubelet[2847]: W0116 18:03:37.161935 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.162052 kubelet[2847]: E0116 18:03:37.161961 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.163156 kubelet[2847]: E0116 18:03:37.163095 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.163156 kubelet[2847]: W0116 18:03:37.163114 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.163156 kubelet[2847]: E0116 18:03:37.163127 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.163337 kubelet[2847]: E0116 18:03:37.163313 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.163337 kubelet[2847]: W0116 18:03:37.163328 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.163337 kubelet[2847]: E0116 18:03:37.163337 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.163471 kubelet[2847]: E0116 18:03:37.163451 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.163471 kubelet[2847]: W0116 18:03:37.163462 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.163471 kubelet[2847]: E0116 18:03:37.163470 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.163696 kubelet[2847]: E0116 18:03:37.163678 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.163696 kubelet[2847]: W0116 18:03:37.163691 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.163759 kubelet[2847]: E0116 18:03:37.163700 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.163964 kubelet[2847]: E0116 18:03:37.163818 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.163964 kubelet[2847]: W0116 18:03:37.163829 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.163964 kubelet[2847]: E0116 18:03:37.163836 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.166803 containerd[1593]: time="2026-01-16T18:03:37.166758754Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6fm7v,Uid:aa02cb8d-8a25-4da4-be15-5878de50f5d9,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:37.188322 kubelet[2847]: E0116 18:03:37.188079 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.188322 kubelet[2847]: W0116 18:03:37.188107 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.188322 kubelet[2847]: E0116 18:03:37.188135 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.188322 kubelet[2847]: I0116 18:03:37.188169 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb61dc-6bdf-4bf7-9a33-c67b671e2820-kubelet-dir\") pod \"csi-node-driver-x7tcq\" (UID: \"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820\") " pod="calico-system/csi-node-driver-x7tcq" Jan 16 18:03:37.190020 kubelet[2847]: E0116 18:03:37.189570 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.190020 kubelet[2847]: W0116 18:03:37.189594 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.190020 kubelet[2847]: E0116 18:03:37.189614 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.190020 kubelet[2847]: I0116 18:03:37.189645 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb61dc-6bdf-4bf7-9a33-c67b671e2820-registration-dir\") pod \"csi-node-driver-x7tcq\" (UID: \"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820\") " pod="calico-system/csi-node-driver-x7tcq" Jan 16 18:03:37.191494 kubelet[2847]: E0116 18:03:37.191277 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.191494 kubelet[2847]: W0116 18:03:37.191305 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.191494 kubelet[2847]: E0116 18:03:37.191325 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.191494 kubelet[2847]: I0116 18:03:37.191389 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7bx5f\" (UniqueName: \"kubernetes.io/projected/c7bb61dc-6bdf-4bf7-9a33-c67b671e2820-kube-api-access-7bx5f\") pod \"csi-node-driver-x7tcq\" (UID: \"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820\") " pod="calico-system/csi-node-driver-x7tcq" Jan 16 18:03:37.193059 kubelet[2847]: E0116 18:03:37.192766 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.193059 kubelet[2847]: W0116 18:03:37.192786 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.193059 kubelet[2847]: E0116 18:03:37.192803 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.193695 kubelet[2847]: E0116 18:03:37.193422 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.193695 kubelet[2847]: W0116 18:03:37.193441 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.193695 kubelet[2847]: E0116 18:03:37.193457 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.195129 kubelet[2847]: E0116 18:03:37.195092 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.195399 kubelet[2847]: W0116 18:03:37.195227 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.195399 kubelet[2847]: E0116 18:03:37.195259 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.195479 kubelet[2847]: I0116 18:03:37.195384 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/c7bb61dc-6bdf-4bf7-9a33-c67b671e2820-socket-dir\") pod \"csi-node-driver-x7tcq\" (UID: \"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820\") " pod="calico-system/csi-node-driver-x7tcq" Jan 16 18:03:37.195691 kubelet[2847]: E0116 18:03:37.195676 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.197280 kubelet[2847]: W0116 18:03:37.196001 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.197280 kubelet[2847]: E0116 18:03:37.196026 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.197527 kubelet[2847]: E0116 18:03:37.197509 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.199006 kubelet[2847]: W0116 18:03:37.198784 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.199006 kubelet[2847]: E0116 18:03:37.198818 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.199381 kubelet[2847]: E0116 18:03:37.199252 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.199381 kubelet[2847]: W0116 18:03:37.199267 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.199381 kubelet[2847]: E0116 18:03:37.199280 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.200215 kubelet[2847]: E0116 18:03:37.200110 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.200726 kubelet[2847]: W0116 18:03:37.200323 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.200839 kubelet[2847]: E0116 18:03:37.200822 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.200931 kubelet[2847]: I0116 18:03:37.200916 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/c7bb61dc-6bdf-4bf7-9a33-c67b671e2820-varrun\") pod \"csi-node-driver-x7tcq\" (UID: \"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820\") " pod="calico-system/csi-node-driver-x7tcq" Jan 16 18:03:37.201860 kubelet[2847]: E0116 18:03:37.201794 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.201860 kubelet[2847]: W0116 18:03:37.201819 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.201860 kubelet[2847]: E0116 18:03:37.201833 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.202535 kubelet[2847]: E0116 18:03:37.202378 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.202535 kubelet[2847]: W0116 18:03:37.202393 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.202535 kubelet[2847]: E0116 18:03:37.202408 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.202988 kubelet[2847]: E0116 18:03:37.202838 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.202988 kubelet[2847]: W0116 18:03:37.202850 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.202988 kubelet[2847]: E0116 18:03:37.202862 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.203437 kubelet[2847]: E0116 18:03:37.203416 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.203437 kubelet[2847]: W0116 18:03:37.203433 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.204348 kubelet[2847]: E0116 18:03:37.203445 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.205322 kubelet[2847]: E0116 18:03:37.205279 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.205322 kubelet[2847]: W0116 18:03:37.205312 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.205411 kubelet[2847]: E0116 18:03:37.205330 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.212897 containerd[1593]: time="2026-01-16T18:03:37.212833300Z" level=info msg="connecting to shim 88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8" address="unix:///run/containerd/s/aef0ed511ffac975d2eabfb300c57ca780d0496c24e3b749d6b3fe58e0b20a11" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:37.256612 systemd[1]: Started cri-containerd-88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8.scope - libcontainer container 88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8. 
Jan 16 18:03:37.269992 containerd[1593]: time="2026-01-16T18:03:37.269930512Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-cf6c97d76-bvq9q,Uid:eabd4912-97c3-4b0b-9ed0-04eb7018c8a7,Namespace:calico-system,Attempt:0,} returns sandbox id \"4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d\"" Jan 16 18:03:37.273203 containerd[1593]: time="2026-01-16T18:03:37.273047429Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 18:03:37.290000 audit: BPF prog-id=154 op=LOAD Jan 16 18:03:37.291000 audit: BPF prog-id=155 op=LOAD Jan 16 18:03:37.291000 audit[3369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f8180 a2=98 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.291000 audit: BPF prog-id=155 op=UNLOAD Jan 16 18:03:37.291000 audit[3369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.291000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.292000 audit: BPF prog-id=156 op=LOAD Jan 16 18:03:37.292000 audit[3369]: SYSCALL 
arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f83e8 a2=98 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.292000 audit: BPF prog-id=157 op=LOAD Jan 16 18:03:37.292000 audit[3369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001f8168 a2=98 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.292000 audit: BPF prog-id=157 op=UNLOAD Jan 16 18:03:37.292000 audit[3369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.292000 audit: BPF prog-id=156 
op=UNLOAD Jan 16 18:03:37.292000 audit[3369]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.292000 audit: BPF prog-id=158 op=LOAD Jan 16 18:03:37.292000 audit[3369]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001f8648 a2=98 a3=0 items=0 ppid=3358 pid=3369 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.292000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3838656236663831313739346437376238306564396430303633306330 Jan 16 18:03:37.304928 kubelet[2847]: E0116 18:03:37.304896 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.304928 kubelet[2847]: W0116 18:03:37.304920 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.304928 kubelet[2847]: E0116 18:03:37.304942 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.306083 kubelet[2847]: E0116 18:03:37.306048 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.306083 kubelet[2847]: W0116 18:03:37.306076 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.306297 kubelet[2847]: E0116 18:03:37.306096 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.307635 kubelet[2847]: E0116 18:03:37.307612 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.307806 kubelet[2847]: W0116 18:03:37.307692 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.307806 kubelet[2847]: E0116 18:03:37.307714 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.309141 kubelet[2847]: E0116 18:03:37.309077 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.309141 kubelet[2847]: W0116 18:03:37.309093 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.309141 kubelet[2847]: E0116 18:03:37.309106 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.311033 kubelet[2847]: E0116 18:03:37.310895 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.311033 kubelet[2847]: W0116 18:03:37.310914 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.311033 kubelet[2847]: E0116 18:03:37.310934 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.312348 kubelet[2847]: E0116 18:03:37.311771 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.314134 kubelet[2847]: W0116 18:03:37.313163 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.314134 kubelet[2847]: E0116 18:03:37.313202 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.314134 kubelet[2847]: E0116 18:03:37.313990 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.314134 kubelet[2847]: W0116 18:03:37.314004 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.314134 kubelet[2847]: E0116 18:03:37.314069 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.314974 kubelet[2847]: E0116 18:03:37.314787 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.315112 kubelet[2847]: W0116 18:03:37.315082 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.315929 kubelet[2847]: E0116 18:03:37.315790 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.316446 kubelet[2847]: E0116 18:03:37.316353 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.316930 kubelet[2847]: W0116 18:03:37.316741 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.316930 kubelet[2847]: E0116 18:03:37.316767 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.317965 kubelet[2847]: E0116 18:03:37.317772 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.317965 kubelet[2847]: W0116 18:03:37.317877 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.317965 kubelet[2847]: E0116 18:03:37.317894 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.319551 kubelet[2847]: E0116 18:03:37.319534 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.319881 kubelet[2847]: W0116 18:03:37.319721 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.319881 kubelet[2847]: E0116 18:03:37.319745 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.320962 kubelet[2847]: E0116 18:03:37.320839 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.321418 kubelet[2847]: W0116 18:03:37.321392 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.321874 kubelet[2847]: E0116 18:03:37.321764 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.322860 kubelet[2847]: E0116 18:03:37.322842 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.323299 kubelet[2847]: W0116 18:03:37.322903 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.323299 kubelet[2847]: E0116 18:03:37.322920 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.323828 kubelet[2847]: E0116 18:03:37.323724 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.324463 kubelet[2847]: W0116 18:03:37.323971 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.324463 kubelet[2847]: E0116 18:03:37.324188 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.325764 kubelet[2847]: E0116 18:03:37.325746 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.326035 kubelet[2847]: W0116 18:03:37.325855 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.326035 kubelet[2847]: E0116 18:03:37.325877 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.328072 containerd[1593]: time="2026-01-16T18:03:37.327225911Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-6fm7v,Uid:aa02cb8d-8a25-4da4-be15-5878de50f5d9,Namespace:calico-system,Attempt:0,} returns sandbox id \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\"" Jan 16 18:03:37.329428 kubelet[2847]: E0116 18:03:37.328968 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.329428 kubelet[2847]: W0116 18:03:37.328988 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.329428 kubelet[2847]: E0116 18:03:37.329004 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.330056 kubelet[2847]: E0116 18:03:37.329730 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.330056 kubelet[2847]: W0116 18:03:37.329745 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.330056 kubelet[2847]: E0116 18:03:37.329758 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.330711 kubelet[2847]: E0116 18:03:37.330313 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.330711 kubelet[2847]: W0116 18:03:37.330329 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.330711 kubelet[2847]: E0116 18:03:37.330341 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.331938 kubelet[2847]: E0116 18:03:37.331812 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.331938 kubelet[2847]: W0116 18:03:37.331830 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.331938 kubelet[2847]: E0116 18:03:37.331843 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.333738 kubelet[2847]: E0116 18:03:37.333631 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.334186 kubelet[2847]: W0116 18:03:37.334161 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.334621 kubelet[2847]: E0116 18:03:37.334600 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.335380 kubelet[2847]: E0116 18:03:37.335363 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.336341 kubelet[2847]: W0116 18:03:37.336318 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.336464 kubelet[2847]: E0116 18:03:37.336450 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.336830 kubelet[2847]: E0116 18:03:37.336814 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.336964 kubelet[2847]: W0116 18:03:37.336909 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.336964 kubelet[2847]: E0116 18:03:37.336929 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.337457 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.338118 kubelet[2847]: W0116 18:03:37.337470 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.337484 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.337807 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.338118 kubelet[2847]: W0116 18:03:37.337818 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.337831 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.338070 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.338118 kubelet[2847]: W0116 18:03:37.338082 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.338118 kubelet[2847]: E0116 18:03:37.338093 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:37.356774 kubelet[2847]: E0116 18:03:37.356720 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:37.356774 kubelet[2847]: W0116 18:03:37.356767 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:37.356774 kubelet[2847]: E0116 18:03:37.356789 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:38.700263 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1647380653.mount: Deactivated successfully. Jan 16 18:03:39.101414 kubelet[2847]: E0116 18:03:39.100917 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:03:39.188638 containerd[1593]: time="2026-01-16T18:03:39.188570633Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:39.190629 containerd[1593]: time="2026-01-16T18:03:39.190570014Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:39.191722 containerd[1593]: time="2026-01-16T18:03:39.191675119Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:39.197054 containerd[1593]: time="2026-01-16T18:03:39.196809189Z" level=info msg="ImageCreate event 
name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:39.200529 containerd[1593]: time="2026-01-16T18:03:39.200429823Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.927336827s" Jan 16 18:03:39.200529 containerd[1593]: time="2026-01-16T18:03:39.200474068Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 16 18:03:39.204959 containerd[1593]: time="2026-01-16T18:03:39.204183393Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 18:03:39.222492 containerd[1593]: time="2026-01-16T18:03:39.222449020Z" level=info msg="CreateContainer within sandbox \"4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 18:03:39.232738 containerd[1593]: time="2026-01-16T18:03:39.232691999Z" level=info msg="Container 6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:39.249430 containerd[1593]: time="2026-01-16T18:03:39.249362657Z" level=info msg="CreateContainer within sandbox \"4e58988391eaca7e6ef5cc8e600a956de94908905d3b9e36b8b6024c56f4716d\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97\"" Jan 16 18:03:39.256794 containerd[1593]: time="2026-01-16T18:03:39.256413498Z" level=info msg="StartContainer for 
\"6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97\"" Jan 16 18:03:39.264976 containerd[1593]: time="2026-01-16T18:03:39.264402942Z" level=info msg="connecting to shim 6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97" address="unix:///run/containerd/s/f7cefbd23db02984ac5ab1ae2fee9fa56826beab350b88b1c2e17f6285ef399f" protocol=ttrpc version=3 Jan 16 18:03:39.294477 systemd[1]: Started cri-containerd-6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97.scope - libcontainer container 6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97. Jan 16 18:03:39.314000 audit: BPF prog-id=159 op=LOAD Jan 16 18:03:39.317051 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 16 18:03:39.317103 kernel: audit: type=1334 audit(1768586619.314:541): prog-id=159 op=LOAD Jan 16 18:03:39.316000 audit: BPF prog-id=160 op=LOAD Jan 16 18:03:39.316000 audit[3439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.321093 kernel: audit: type=1334 audit(1768586619.316:542): prog-id=160 op=LOAD Jan 16 18:03:39.321446 kernel: audit: type=1300 audit(1768586619.316:542): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.316000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.323627 kernel: audit: type=1327 
audit(1768586619.316:542): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=160 op=UNLOAD Jan 16 18:03:39.325417 kernel: audit: type=1334 audit(1768586619.317:543): prog-id=160 op=UNLOAD Jan 16 18:03:39.325571 kernel: audit: type=1300 audit(1768586619.317:543): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit[3439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.329929 kernel: audit: type=1327 audit(1768586619.317:543): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=161 op=LOAD Jan 16 18:03:39.317000 audit[3439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.333382 kernel: audit: type=1334 audit(1768586619.317:544): prog-id=161 op=LOAD Jan 16 18:03:39.333432 kernel: audit: type=1300 audit(1768586619.317:544): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.336368 kernel: audit: type=1327 audit(1768586619.317:544): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=162 op=LOAD Jan 16 18:03:39.317000 audit[3439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=162 op=UNLOAD Jan 16 18:03:39.317000 
audit[3439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=161 op=UNLOAD Jan 16 18:03:39.317000 audit[3439]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.317000 audit: BPF prog-id=163 op=LOAD Jan 16 18:03:39.317000 audit[3439]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3274 pid=3439 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:39.317000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3661373037643337666535646265653163666161393861316564383436 Jan 16 18:03:39.370556 containerd[1593]: 
time="2026-01-16T18:03:39.370413556Z" level=info msg="StartContainer for \"6a707d37fe5dbee1cfaa98a1ed8464a010d45b475131f73203a5939eb3885e97\" returns successfully" Jan 16 18:03:40.283262 kubelet[2847]: E0116 18:03:40.283233 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.283818 kubelet[2847]: W0116 18:03:40.283465 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.283818 kubelet[2847]: E0116 18:03:40.283491 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.283916 kubelet[2847]: E0116 18:03:40.283901 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.283994 kubelet[2847]: W0116 18:03:40.283916 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.284189 kubelet[2847]: E0116 18:03:40.284167 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.284816 kubelet[2847]: E0116 18:03:40.284799 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.284816 kubelet[2847]: W0116 18:03:40.284815 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.284992 kubelet[2847]: E0116 18:03:40.284831 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.285150 kubelet[2847]: E0116 18:03:40.285137 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.285343 kubelet[2847]: W0116 18:03:40.285150 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.285343 kubelet[2847]: E0116 18:03:40.285161 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.285627 kubelet[2847]: E0116 18:03:40.285611 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.285700 kubelet[2847]: W0116 18:03:40.285628 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.285700 kubelet[2847]: E0116 18:03:40.285642 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.287037 kubelet[2847]: E0116 18:03:40.287020 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.287037 kubelet[2847]: W0116 18:03:40.287036 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.287115 kubelet[2847]: E0116 18:03:40.287048 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.287566 kubelet[2847]: E0116 18:03:40.287549 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.287611 kubelet[2847]: W0116 18:03:40.287566 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.288895 kubelet[2847]: E0116 18:03:40.287579 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.289772 kubelet[2847]: E0116 18:03:40.289752 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.289772 kubelet[2847]: W0116 18:03:40.289772 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.289869 kubelet[2847]: E0116 18:03:40.289785 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.290031 kubelet[2847]: E0116 18:03:40.290019 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.290094 kubelet[2847]: W0116 18:03:40.290031 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.290094 kubelet[2847]: E0116 18:03:40.290041 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.290262 kubelet[2847]: E0116 18:03:40.290252 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.290294 kubelet[2847]: W0116 18:03:40.290263 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.290294 kubelet[2847]: E0116 18:03:40.290280 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.290480 kubelet[2847]: E0116 18:03:40.290466 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.290480 kubelet[2847]: W0116 18:03:40.290479 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.290553 kubelet[2847]: E0116 18:03:40.290489 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.290777 kubelet[2847]: E0116 18:03:40.290763 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.290777 kubelet[2847]: W0116 18:03:40.290776 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.290842 kubelet[2847]: E0116 18:03:40.290787 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.291026 kubelet[2847]: E0116 18:03:40.291012 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.291159 kubelet[2847]: W0116 18:03:40.291088 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.291159 kubelet[2847]: E0116 18:03:40.291107 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.291472 kubelet[2847]: E0116 18:03:40.291353 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.291472 kubelet[2847]: W0116 18:03:40.291365 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.291472 kubelet[2847]: E0116 18:03:40.291378 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.291638 kubelet[2847]: E0116 18:03:40.291628 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.291809 kubelet[2847]: W0116 18:03:40.291726 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.291809 kubelet[2847]: E0116 18:03:40.291745 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.345736 kubelet[2847]: E0116 18:03:40.345614 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.345736 kubelet[2847]: W0116 18:03:40.345677 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.346362 kubelet[2847]: E0116 18:03:40.346117 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.347185 kubelet[2847]: E0116 18:03:40.347125 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.347392 kubelet[2847]: W0116 18:03:40.347155 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.347392 kubelet[2847]: E0116 18:03:40.347339 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.347773 kubelet[2847]: E0116 18:03:40.347746 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.347773 kubelet[2847]: W0116 18:03:40.347771 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.347916 kubelet[2847]: E0116 18:03:40.347789 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.348073 kubelet[2847]: E0116 18:03:40.347989 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.348073 kubelet[2847]: W0116 18:03:40.347999 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.348073 kubelet[2847]: E0116 18:03:40.348010 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.348186 kubelet[2847]: E0116 18:03:40.348169 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.348186 kubelet[2847]: W0116 18:03:40.348178 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.348287 kubelet[2847]: E0116 18:03:40.348189 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.348386 kubelet[2847]: E0116 18:03:40.348375 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.348481 kubelet[2847]: W0116 18:03:40.348387 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.348481 kubelet[2847]: E0116 18:03:40.348398 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.348826 kubelet[2847]: E0116 18:03:40.348796 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.348916 kubelet[2847]: W0116 18:03:40.348903 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.349135 kubelet[2847]: E0116 18:03:40.349005 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.350060 kubelet[2847]: E0116 18:03:40.349714 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.350060 kubelet[2847]: W0116 18:03:40.349732 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.350060 kubelet[2847]: E0116 18:03:40.349757 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.350591 kubelet[2847]: E0116 18:03:40.350563 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.350750 kubelet[2847]: W0116 18:03:40.350723 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.350935 kubelet[2847]: E0116 18:03:40.350896 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.351513 kubelet[2847]: E0116 18:03:40.351477 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.351678 kubelet[2847]: W0116 18:03:40.351636 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.351831 kubelet[2847]: E0116 18:03:40.351806 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.352730 kubelet[2847]: E0116 18:03:40.352323 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.352730 kubelet[2847]: W0116 18:03:40.352383 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.352869 kubelet[2847]: E0116 18:03:40.352851 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.353241 kubelet[2847]: E0116 18:03:40.353225 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.353357 kubelet[2847]: W0116 18:03:40.353341 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.353494 kubelet[2847]: E0116 18:03:40.353477 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.354066 kubelet[2847]: E0116 18:03:40.354053 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.354148 kubelet[2847]: W0116 18:03:40.354137 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.354223 kubelet[2847]: E0116 18:03:40.354212 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.356123 kubelet[2847]: E0116 18:03:40.356109 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.356235 kubelet[2847]: W0116 18:03:40.356224 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.356293 kubelet[2847]: E0116 18:03:40.356282 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.356511 kubelet[2847]: E0116 18:03:40.356500 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.356687 kubelet[2847]: W0116 18:03:40.356561 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.356687 kubelet[2847]: E0116 18:03:40.356575 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.356827 kubelet[2847]: E0116 18:03:40.356818 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.356888 kubelet[2847]: W0116 18:03:40.356878 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.356992 kubelet[2847]: E0116 18:03:40.356928 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.357317 kubelet[2847]: E0116 18:03:40.357305 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.357499 kubelet[2847]: W0116 18:03:40.357382 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.357499 kubelet[2847]: E0116 18:03:40.357404 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:03:40.357642 kubelet[2847]: E0116 18:03:40.357633 2847 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:03:40.357711 kubelet[2847]: W0116 18:03:40.357701 2847 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:03:40.357780 kubelet[2847]: E0116 18:03:40.357751 2847 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:03:40.624494 containerd[1593]: time="2026-01-16T18:03:40.624294523Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:40.627967 containerd[1593]: time="2026-01-16T18:03:40.627724716Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:40.628885 containerd[1593]: time="2026-01-16T18:03:40.628788251Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:40.632462 containerd[1593]: time="2026-01-16T18:03:40.632209203Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:40.632840 containerd[1593]: time="2026-01-16T18:03:40.632808358Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.42858368s" Jan 16 18:03:40.632891 containerd[1593]: time="2026-01-16T18:03:40.632843683Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 16 18:03:40.640083 containerd[1593]: time="2026-01-16T18:03:40.640031391Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 18:03:40.656132 containerd[1593]: time="2026-01-16T18:03:40.654226984Z" level=info msg="Container 3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:40.659848 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount769161273.mount: Deactivated successfully. 
Jan 16 18:03:40.670863 containerd[1593]: time="2026-01-16T18:03:40.670739149Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380\"" Jan 16 18:03:40.672490 containerd[1593]: time="2026-01-16T18:03:40.672421882Z" level=info msg="StartContainer for \"3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380\"" Jan 16 18:03:40.674788 containerd[1593]: time="2026-01-16T18:03:40.674746175Z" level=info msg="connecting to shim 3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380" address="unix:///run/containerd/s/aef0ed511ffac975d2eabfb300c57ca780d0496c24e3b749d6b3fe58e0b20a11" protocol=ttrpc version=3 Jan 16 18:03:40.709356 systemd[1]: Started cri-containerd-3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380.scope - libcontainer container 3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380. 
Jan 16 18:03:40.786000 audit: BPF prog-id=164 op=LOAD
Jan 16 18:03:40.786000 audit[3514]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:40.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323835363363303231363665633662313562313834633166626632
Jan 16 18:03:40.786000 audit: BPF prog-id=165 op=LOAD
Jan 16 18:03:40.786000 audit[3514]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:40.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323835363363303231363665633662313562313834633166626632
Jan 16 18:03:40.786000 audit: BPF prog-id=165 op=UNLOAD
Jan 16 18:03:40.786000 audit[3514]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:40.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323835363363303231363665633662313562313834633166626632
Jan 16 18:03:40.786000 audit: BPF prog-id=164 op=UNLOAD
Jan 16 18:03:40.786000 audit[3514]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:40.786000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323835363363303231363665633662313562313834633166626632
Jan 16 18:03:40.787000 audit: BPF prog-id=166 op=LOAD
Jan 16 18:03:40.787000 audit[3514]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3358 pid=3514 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:40.787000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3334323835363363303231363665633662313562313834633166626632
Jan 16 18:03:40.815536 containerd[1593]: time="2026-01-16T18:03:40.815386178Z" level=info msg="StartContainer for \"3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380\" returns successfully"
Jan 16 18:03:40.831831 systemd[1]: cri-containerd-3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380.scope: Deactivated successfully.
Jan 16 18:03:40.835395 containerd[1593]: time="2026-01-16T18:03:40.835236966Z" level=info msg="received container exit event container_id:\"3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380\" id:\"3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380\" pid:3528 exited_at:{seconds:1768586620 nanos:834548919}"
Jan 16 18:03:40.835000 audit: BPF prog-id=166 op=UNLOAD
Jan 16 18:03:40.864292 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3428563c02166ec6b15b184c1fbf2d4d31cf4ace9aa876269c9fa0fc2b7d7380-rootfs.mount: Deactivated successfully.
Jan 16 18:03:41.101997 kubelet[2847]: E0116 18:03:41.101724 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820"
Jan 16 18:03:41.281506 kubelet[2847]: I0116 18:03:41.281007 2847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness"
Jan 16 18:03:41.282975 containerd[1593]: time="2026-01-16T18:03:41.282934554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\""
Jan 16 18:03:41.308912 kubelet[2847]: I0116 18:03:41.307690 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-cf6c97d76-bvq9q" podStartSLOduration=3.376797682 podStartE2EDuration="5.307672697s" podCreationTimestamp="2026-01-16 18:03:36 +0000 UTC" firstStartedPulling="2026-01-16 18:03:37.272569882 +0000 UTC m=+28.304490173" lastFinishedPulling="2026-01-16 18:03:39.203444857 +0000 UTC m=+30.235365188" observedRunningTime="2026-01-16 18:03:40.300725975 +0000 UTC m=+31.332646346" watchObservedRunningTime="2026-01-16 18:03:41.307672697 +0000 UTC m=+32.339593028"
Jan 16 18:03:43.096963 kubelet[2847]: E0116 18:03:43.096713 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820"
Jan 16 18:03:43.925515 containerd[1593]: time="2026-01-16T18:03:43.925417339Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 18:03:43.928140 containerd[1593]: time="2026-01-16T18:03:43.928046840Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248"
Jan 16 18:03:43.929721 containerd[1593]: time="2026-01-16T18:03:43.929633462Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 18:03:43.933260 containerd[1593]: time="2026-01-16T18:03:43.933188950Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}"
Jan 16 18:03:43.934966 containerd[1593]: time="2026-01-16T18:03:43.934849421Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.651624351s"
Jan 16 18:03:43.934966 containerd[1593]: time="2026-01-16T18:03:43.934894546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\""
Jan 16 18:03:43.941432 containerd[1593]: time="2026-01-16T18:03:43.941383571Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}"
Jan 16 18:03:43.952248 containerd[1593]: time="2026-01-16T18:03:43.952204932Z" level=info msg="Container 9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1: CDI devices from CRI Config.CDIDevices: []"
Jan 16 18:03:43.961492 containerd[1593]: time="2026-01-16T18:03:43.961418710Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1\""
Jan 16 18:03:43.962373 containerd[1593]: time="2026-01-16T18:03:43.962226442Z" level=info msg="StartContainer for \"9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1\""
Jan 16 18:03:43.966412 containerd[1593]: time="2026-01-16T18:03:43.966207939Z" level=info msg="connecting to shim 9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1" address="unix:///run/containerd/s/aef0ed511ffac975d2eabfb300c57ca780d0496c24e3b749d6b3fe58e0b20a11" protocol=ttrpc version=3
Jan 16 18:03:44.000367 systemd[1]: Started cri-containerd-9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1.scope - libcontainer container 9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1.
Jan 16 18:03:44.052000 audit: BPF prog-id=167 op=LOAD
Jan 16 18:03:44.052000 audit[3572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3358 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:44.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930303839383639303161633338313036333664366239386337396433
Jan 16 18:03:44.052000 audit: BPF prog-id=168 op=LOAD
Jan 16 18:03:44.052000 audit[3572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3358 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:44.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930303839383639303161633338313036333664366239386337396433
Jan 16 18:03:44.052000 audit: BPF prog-id=168 op=UNLOAD
Jan 16 18:03:44.052000 audit[3572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:44.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930303839383639303161633338313036333664366239386337396433
Jan 16 18:03:44.052000 audit: BPF prog-id=167 op=UNLOAD
Jan 16 18:03:44.052000 audit[3572]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:44.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930303839383639303161633338313036333664366239386337396433
Jan 16 18:03:44.052000 audit: BPF prog-id=169 op=LOAD
Jan 16 18:03:44.052000 audit[3572]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3358 pid=3572 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null)
Jan 16 18:03:44.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3930303839383639303161633338313036333664366239386337396433
Jan 16 18:03:44.082018 containerd[1593]: time="2026-01-16T18:03:44.081973470Z" level=info msg="StartContainer for \"9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1\" returns successfully"
Jan 16 18:03:44.694456 containerd[1593]: time="2026-01-16T18:03:44.694406997Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config"
Jan 16 18:03:44.697790 systemd[1]: cri-containerd-9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1.scope: Deactivated successfully.
Jan 16 18:03:44.699083 systemd[1]: cri-containerd-9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1.scope: Consumed 549ms CPU time, 189.5M memory peak, 165.9M written to disk.
Jan 16 18:03:44.700000 audit: BPF prog-id=169 op=UNLOAD
Jan 16 18:03:44.702181 kernel: kauditd_printk_skb: 43 callbacks suppressed
Jan 16 18:03:44.702257 kernel: audit: type=1334 audit(1768586624.700:560): prog-id=169 op=UNLOAD
Jan 16 18:03:44.705049 containerd[1593]: time="2026-01-16T18:03:44.704989495Z" level=info msg="received container exit event container_id:\"9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1\" id:\"9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1\" pid:3586 exited_at:{seconds:1768586624 nanos:704442634}"
Jan 16 18:03:44.711471 kubelet[2847]: I0116 18:03:44.709491 2847 kubelet_node_status.go:501] "Fast updating node status as it just became ready"
Jan 16 18:03:44.748377 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-9008986901ac3810636d6b98c79d3289f36c46f8b8e4bc06a1071ee512a93ac1-rootfs.mount: Deactivated successfully.
Jan 16 18:03:44.845633 systemd[1]: Created slice kubepods-burstable-pod199a61b9_0789_47e3_ae0e_1db7815072b1.slice - libcontainer container kubepods-burstable-pod199a61b9_0789_47e3_ae0e_1db7815072b1.slice.
Jan 16 18:03:44.863685 systemd[1]: Created slice kubepods-burstable-podc5d82fbc_6b84_4542_a0ba_6775b07ac632.slice - libcontainer container kubepods-burstable-podc5d82fbc_6b84_4542_a0ba_6775b07ac632.slice.
Jan 16 18:03:44.886278 systemd[1]: Created slice kubepods-besteffort-pod803aa494_41ae_4e2d_9b86_fe41697d291d.slice - libcontainer container kubepods-besteffort-pod803aa494_41ae_4e2d_9b86_fe41697d291d.slice.
Jan 16 18:03:44.897330 systemd[1]: Created slice kubepods-besteffort-podacf5809a_b3e8_41d0_86b6_edf701abdda5.slice - libcontainer container kubepods-besteffort-podacf5809a_b3e8_41d0_86b6_edf701abdda5.slice.
Jan 16 18:03:44.907081 systemd[1]: Created slice kubepods-besteffort-pod8e0474a2_d39a_4579_914b_c283a50c8f89.slice - libcontainer container kubepods-besteffort-pod8e0474a2_d39a_4579_914b_c283a50c8f89.slice.
Jan 16 18:03:44.916106 systemd[1]: Created slice kubepods-besteffort-pod29a43599_84fe_43df_8ccd_cc9d89aeeb36.slice - libcontainer container kubepods-besteffort-pod29a43599_84fe_43df_8ccd_cc9d89aeeb36.slice.
Jan 16 18:03:44.923633 systemd[1]: Created slice kubepods-besteffort-pod324c3e73_c2dc_4ae9_8875_3415b8305668.slice - libcontainer container kubepods-besteffort-pod324c3e73_c2dc_4ae9_8875_3415b8305668.slice.
Jan 16 18:03:44.984868 kubelet[2847]: I0116 18:03:44.984775 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nrhkn\" (UniqueName: \"kubernetes.io/projected/c5d82fbc-6b84-4542-a0ba-6775b07ac632-kube-api-access-nrhkn\") pod \"coredns-674b8bbfcf-pr96z\" (UID: \"c5d82fbc-6b84-4542-a0ba-6775b07ac632\") " pod="kube-system/coredns-674b8bbfcf-pr96z"
Jan 16 18:03:44.984868 kubelet[2847]: I0116 18:03:44.984869 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/29a43599-84fe-43df-8ccd-cc9d89aeeb36-goldmane-ca-bundle\") pod \"goldmane-666569f655-zmqhv\" (UID: \"29a43599-84fe-43df-8ccd-cc9d89aeeb36\") " pod="calico-system/goldmane-666569f655-zmqhv"
Jan 16 18:03:44.985407 kubelet[2847]: I0116 18:03:44.984915 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t4ngm\" (UniqueName: \"kubernetes.io/projected/199a61b9-0789-47e3-ae0e-1db7815072b1-kube-api-access-t4ngm\") pod \"coredns-674b8bbfcf-26vdn\" (UID: \"199a61b9-0789-47e3-ae0e-1db7815072b1\") " pod="kube-system/coredns-674b8bbfcf-26vdn"
Jan 16 18:03:44.985407 kubelet[2847]: I0116 18:03:44.985077 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/803aa494-41ae-4e2d-9b86-fe41697d291d-tigera-ca-bundle\") pod \"calico-kube-controllers-7995d6d9cc-4t5v5\" (UID: \"803aa494-41ae-4e2d-9b86-fe41697d291d\") " pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5"
Jan 16 18:03:44.985407 kubelet[2847]: I0116 18:03:44.985185 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/acf5809a-b3e8-41d0-86b6-edf701abdda5-calico-apiserver-certs\") pod \"calico-apiserver-795d7bd556-lxhzl\" (UID: \"acf5809a-b3e8-41d0-86b6-edf701abdda5\") " pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl"
Jan 16 18:03:44.985407 kubelet[2847]: I0116 18:03:44.985266 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7kdrc\" (UniqueName: \"kubernetes.io/projected/29a43599-84fe-43df-8ccd-cc9d89aeeb36-kube-api-access-7kdrc\") pod \"goldmane-666569f655-zmqhv\" (UID: \"29a43599-84fe-43df-8ccd-cc9d89aeeb36\") " pod="calico-system/goldmane-666569f655-zmqhv"
Jan 16 18:03:44.985407 kubelet[2847]: I0116 18:03:44.985305 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ft29j\" (UniqueName: \"kubernetes.io/projected/803aa494-41ae-4e2d-9b86-fe41697d291d-kube-api-access-ft29j\") pod \"calico-kube-controllers-7995d6d9cc-4t5v5\" (UID: \"803aa494-41ae-4e2d-9b86-fe41697d291d\") " pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5"
Jan 16 18:03:44.985860 kubelet[2847]: I0116 18:03:44.985383 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tt8wg\" (UniqueName: \"kubernetes.io/projected/324c3e73-c2dc-4ae9-8875-3415b8305668-kube-api-access-tt8wg\") pod \"calico-apiserver-795d7bd556-2wqnw\" (UID: \"324c3e73-c2dc-4ae9-8875-3415b8305668\") " pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw"
Jan 16 18:03:44.985860 kubelet[2847]: I0116 18:03:44.985502 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/c5d82fbc-6b84-4542-a0ba-6775b07ac632-config-volume\") pod \"coredns-674b8bbfcf-pr96z\" (UID: \"c5d82fbc-6b84-4542-a0ba-6775b07ac632\") " pod="kube-system/coredns-674b8bbfcf-pr96z"
Jan 16 18:03:44.985860 kubelet[2847]: I0116 18:03:44.985582 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/29a43599-84fe-43df-8ccd-cc9d89aeeb36-goldmane-key-pair\") pod \"goldmane-666569f655-zmqhv\" (UID: \"29a43599-84fe-43df-8ccd-cc9d89aeeb36\") " pod="calico-system/goldmane-666569f655-zmqhv"
Jan 16 18:03:44.985860 kubelet[2847]: I0116 18:03:44.985656 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5x7th\" (UniqueName: \"kubernetes.io/projected/acf5809a-b3e8-41d0-86b6-edf701abdda5-kube-api-access-5x7th\") pod \"calico-apiserver-795d7bd556-lxhzl\" (UID: \"acf5809a-b3e8-41d0-86b6-edf701abdda5\") " pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl"
Jan 16 18:03:44.985860 kubelet[2847]: I0116 18:03:44.985759 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/29a43599-84fe-43df-8ccd-cc9d89aeeb36-config\") pod \"goldmane-666569f655-zmqhv\" (UID: \"29a43599-84fe-43df-8ccd-cc9d89aeeb36\") " pod="calico-system/goldmane-666569f655-zmqhv"
Jan 16 18:03:44.986296 kubelet[2847]: I0116 18:03:44.985797 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/199a61b9-0789-47e3-ae0e-1db7815072b1-config-volume\") pod \"coredns-674b8bbfcf-26vdn\" (UID: \"199a61b9-0789-47e3-ae0e-1db7815072b1\") " pod="kube-system/coredns-674b8bbfcf-26vdn"
Jan 16 18:03:44.986296 kubelet[2847]: I0116 18:03:44.985837 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/324c3e73-c2dc-4ae9-8875-3415b8305668-calico-apiserver-certs\") pod \"calico-apiserver-795d7bd556-2wqnw\" (UID: \"324c3e73-c2dc-4ae9-8875-3415b8305668\") " pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw"
Jan 16 18:03:44.986296 kubelet[2847]: I0116 18:03:44.985888 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-ca-bundle\") pod \"whisker-6bc869f5f4-j7cbq\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " pod="calico-system/whisker-6bc869f5f4-j7cbq"
Jan 16 18:03:44.986296 kubelet[2847]: I0116 18:03:44.985931 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-backend-key-pair\") pod \"whisker-6bc869f5f4-j7cbq\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " pod="calico-system/whisker-6bc869f5f4-j7cbq"
Jan 16 18:03:44.986296 kubelet[2847]: I0116 18:03:44.985997 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wrkn2\" (UniqueName: \"kubernetes.io/projected/8e0474a2-d39a-4579-914b-c283a50c8f89-kube-api-access-wrkn2\") pod \"whisker-6bc869f5f4-j7cbq\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " pod="calico-system/whisker-6bc869f5f4-j7cbq"
Jan 16 18:03:45.157004 containerd[1593]: time="2026-01-16T18:03:45.156848604Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26vdn,Uid:199a61b9-0789-47e3-ae0e-1db7815072b1,Namespace:kube-system,Attempt:0,}"
Jan 16 18:03:45.168797 systemd[1]: Created slice kubepods-besteffort-podc7bb61dc_6bdf_4bf7_9a33_c67b671e2820.slice - libcontainer container kubepods-besteffort-podc7bb61dc_6bdf_4bf7_9a33_c67b671e2820.slice.
Jan 16 18:03:45.173670 containerd[1593]: time="2026-01-16T18:03:45.173603577Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7tcq,Uid:c7bb61dc-6bdf-4bf7-9a33-c67b671e2820,Namespace:calico-system,Attempt:0,}"
Jan 16 18:03:45.181364 containerd[1593]: time="2026-01-16T18:03:45.181206800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pr96z,Uid:c5d82fbc-6b84-4542-a0ba-6775b07ac632,Namespace:kube-system,Attempt:0,}"
Jan 16 18:03:45.193452 containerd[1593]: time="2026-01-16T18:03:45.192937589Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995d6d9cc-4t5v5,Uid:803aa494-41ae-4e2d-9b86-fe41697d291d,Namespace:calico-system,Attempt:0,}"
Jan 16 18:03:45.204246 containerd[1593]: time="2026-01-16T18:03:45.204207289Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-lxhzl,Uid:acf5809a-b3e8-41d0-86b6-edf701abdda5,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 18:03:45.214768 containerd[1593]: time="2026-01-16T18:03:45.214727747Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-6bc869f5f4-j7cbq,Uid:8e0474a2-d39a-4579-914b-c283a50c8f89,Namespace:calico-system,Attempt:0,}"
Jan 16 18:03:45.223521 containerd[1593]: time="2026-01-16T18:03:45.223427208Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zmqhv,Uid:29a43599-84fe-43df-8ccd-cc9d89aeeb36,Namespace:calico-system,Attempt:0,}"
Jan 16 18:03:45.229899 containerd[1593]: time="2026-01-16T18:03:45.229823460Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-2wqnw,Uid:324c3e73-c2dc-4ae9-8875-3415b8305668,Namespace:calico-apiserver,Attempt:0,}"
Jan 16 18:03:45.322306 containerd[1593]: time="2026-01-16T18:03:45.322191495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\""
Jan 16 18:03:45.408695 containerd[1593]: time="2026-01-16T18:03:45.408645449Z" level=error msg="Failed to destroy network for sandbox \"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.409154 containerd[1593]: time="2026-01-16T18:03:45.409111580Z" level=error msg="Failed to destroy network for sandbox \"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.414461 containerd[1593]: time="2026-01-16T18:03:45.414404552Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pr96z,Uid:c5d82fbc-6b84-4542-a0ba-6775b07ac632,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.415386 kubelet[2847]: E0116 18:03:45.415326 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.415503 kubelet[2847]: E0116 18:03:45.415412 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pr96z"
Jan 16 18:03:45.415503 kubelet[2847]: E0116 18:03:45.415436 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-pr96z"
Jan 16 18:03:45.418056 kubelet[2847]: E0116 18:03:45.415507 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-pr96z_kube-system(c5d82fbc-6b84-4542-a0ba-6775b07ac632)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-pr96z_kube-system(c5d82fbc-6b84-4542-a0ba-6775b07ac632)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7452c2676223a5d02a921c46f29d4a6ca2c837926cf31294f4395d2eed46c887\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-pr96z" podUID="c5d82fbc-6b84-4542-a0ba-6775b07ac632"
Jan 16 18:03:45.423616 containerd[1593]: time="2026-01-16T18:03:45.423548702Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26vdn,Uid:199a61b9-0789-47e3-ae0e-1db7815072b1,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.424423 kubelet[2847]: E0116 18:03:45.423929 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.424423 kubelet[2847]: E0116 18:03:45.424043 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26vdn"
Jan 16 18:03:45.424423 kubelet[2847]: E0116 18:03:45.424066 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-26vdn"
Jan 16 18:03:45.424633 kubelet[2847]: E0116 18:03:45.424227 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-26vdn_kube-system(199a61b9-0789-47e3-ae0e-1db7815072b1)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-26vdn_kube-system(199a61b9-0789-47e3-ae0e-1db7815072b1)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6417dbbc8b70f0be317f281a268d418e116ee696787e616e36bb62165ccd8051\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-26vdn" podUID="199a61b9-0789-47e3-ae0e-1db7815072b1"
Jan 16 18:03:45.448257 containerd[1593]: time="2026-01-16T18:03:45.448182087Z" level=error msg="Failed to destroy network for sandbox \"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.451471 containerd[1593]: time="2026-01-16T18:03:45.451400675Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7tcq,Uid:c7bb61dc-6bdf-4bf7-9a33-c67b671e2820,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.451978 kubelet[2847]: E0116 18:03:45.451913 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.452206 kubelet[2847]: E0116 18:03:45.452145 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x7tcq"
Jan 16 18:03:45.452251 kubelet[2847]: E0116 18:03:45.452207 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-x7tcq"
Jan 16 18:03:45.452507 kubelet[2847]: E0116 18:03:45.452468 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"ac6d016439d5fb2e30bb23a3788126fc9e93251fa94cea4401749d8447ff775b\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820"
Jan 16 18:03:45.460105 containerd[1593]: time="2026-01-16T18:03:45.459997646Z" level=error msg="Failed to destroy network for sandbox \"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.463128 containerd[1593]: time="2026-01-16T18:03:45.463047696Z" level=error msg="Failed to destroy network for sandbox \"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.463716 containerd[1593]: time="2026-01-16T18:03:45.463595075Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995d6d9cc-4t5v5,Uid:803aa494-41ae-4e2d-9b86-fe41697d291d,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.464347 kubelet[2847]: E0116 18:03:45.464063 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.464347 kubelet[2847]: E0116 18:03:45.464127 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5"
Jan 16 18:03:45.464347 kubelet[2847]: E0116 18:03:45.464147 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5"
Jan 16 18:03:45.464542 kubelet[2847]: E0116 18:03:45.464203 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"f22f65503aac6f5b52a93304a2731a821362c06fee68a44ace31d81a96101037\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d"
Jan 16 18:03:45.468110 containerd[1593]: time="2026-01-16T18:03:45.468055517Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-lxhzl,Uid:acf5809a-b3e8-41d0-86b6-edf701abdda5,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/"
Jan 16 18:03:45.468993 kubelet[2847]: E0116 18:03:45.468905 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox
\"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.468993 kubelet[2847]: E0116 18:03:45.468990 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" Jan 16 18:03:45.468993 kubelet[2847]: E0116 18:03:45.469022 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" Jan 16 18:03:45.469173 kubelet[2847]: E0116 18:03:45.469095 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"64558595ef167b7aa705c9f29beae616766ce2a343dee363502d0d6db91eb454\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" 
pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:03:45.487453 containerd[1593]: time="2026-01-16T18:03:45.487382169Z" level=error msg="Failed to destroy network for sandbox \"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.490026 containerd[1593]: time="2026-01-16T18:03:45.489892000Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-2wqnw,Uid:324c3e73-c2dc-4ae9-8875-3415b8305668,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.491092 kubelet[2847]: E0116 18:03:45.490993 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.491533 kubelet[2847]: E0116 18:03:45.491111 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" Jan 16 18:03:45.491533 kubelet[2847]: E0116 18:03:45.491132 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" Jan 16 18:03:45.491533 kubelet[2847]: E0116 18:03:45.491304 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"b2509f8a11baeb539c740cd5ff734f31d5b0ac1e3c307cba049e24511ea1a30e\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:03:45.493064 containerd[1593]: time="2026-01-16T18:03:45.492996056Z" level=error msg="Failed to destroy network for sandbox \"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.499542 containerd[1593]: time="2026-01-16T18:03:45.499457515Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:whisker-6bc869f5f4-j7cbq,Uid:8e0474a2-d39a-4579-914b-c283a50c8f89,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.499941 kubelet[2847]: E0116 18:03:45.499888 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.500205 kubelet[2847]: E0116 18:03:45.500179 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bc869f5f4-j7cbq" Jan 16 18:03:45.500271 kubelet[2847]: E0116 18:03:45.500210 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-6bc869f5f4-j7cbq" Jan 16 18:03:45.500384 kubelet[2847]: E0116 18:03:45.500350 2847 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"CreatePodSandbox\" for \"whisker-6bc869f5f4-j7cbq_calico-system(8e0474a2-d39a-4579-914b-c283a50c8f89)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-6bc869f5f4-j7cbq_calico-system(8e0474a2-d39a-4579-914b-c283a50c8f89)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"d7a72f5a36f42397bc26b33afb13ffb9bdbe3952afb87dcffd03e52576be9f06\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-6bc869f5f4-j7cbq" podUID="8e0474a2-d39a-4579-914b-c283a50c8f89" Jan 16 18:03:45.504343 containerd[1593]: time="2026-01-16T18:03:45.504195628Z" level=error msg="Failed to destroy network for sandbox \"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.511397 containerd[1593]: time="2026-01-16T18:03:45.511316038Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zmqhv,Uid:29a43599-84fe-43df-8ccd-cc9d89aeeb36,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.512176 kubelet[2847]: E0116 18:03:45.511995 2847 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: 
no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:03:45.512176 kubelet[2847]: E0116 18:03:45.512091 2847 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zmqhv" Jan 16 18:03:45.512176 kubelet[2847]: E0116 18:03:45.512119 2847 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-zmqhv" Jan 16 18:03:45.512467 kubelet[2847]: E0116 18:03:45.512419 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73134d7355d88098f817c599ede8cf8f1924835995b177def9dfcc8823e4f8e1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:03:50.269864 systemd[1]: 
var-lib-containerd-tmpmounts-containerd\x2dmount3434588177.mount: Deactivated successfully. Jan 16 18:03:50.292750 containerd[1593]: time="2026-01-16T18:03:50.292680499Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:50.294783 containerd[1593]: time="2026-01-16T18:03:50.294536355Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 16 18:03:50.296391 containerd[1593]: time="2026-01-16T18:03:50.296346767Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:50.300155 containerd[1593]: time="2026-01-16T18:03:50.299154354Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:03:50.300155 containerd[1593]: time="2026-01-16T18:03:50.299832899Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.977181195s" Jan 16 18:03:50.300155 containerd[1593]: time="2026-01-16T18:03:50.299862861Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 16 18:03:50.327754 containerd[1593]: time="2026-01-16T18:03:50.327713270Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 
18:03:50.351979 containerd[1593]: time="2026-01-16T18:03:50.350605487Z" level=info msg="Container 09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:50.366798 containerd[1593]: time="2026-01-16T18:03:50.366708618Z" level=info msg="CreateContainer within sandbox \"88eb6f811794d77b80ed9d00630c0d7be2f88145d08f106c9cb0a5dcf70e3ef8\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111\"" Jan 16 18:03:50.368407 containerd[1593]: time="2026-01-16T18:03:50.368070268Z" level=info msg="StartContainer for \"09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111\"" Jan 16 18:03:50.370553 containerd[1593]: time="2026-01-16T18:03:50.370509379Z" level=info msg="connecting to shim 09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111" address="unix:///run/containerd/s/aef0ed511ffac975d2eabfb300c57ca780d0496c24e3b749d6b3fe58e0b20a11" protocol=ttrpc version=3 Jan 16 18:03:50.427307 systemd[1]: Started cri-containerd-09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111.scope - libcontainer container 09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111. 
Jan 16 18:03:50.490466 kernel: audit: type=1334 audit(1768586630.485:561): prog-id=170 op=LOAD Jan 16 18:03:50.490638 kernel: audit: type=1300 audit(1768586630.485:561): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.485000 audit: BPF prog-id=170 op=LOAD Jan 16 18:03:50.485000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.493838 kernel: audit: type=1327 audit(1768586630.485:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.485000 audit: BPF prog-id=171 op=LOAD Jan 16 18:03:50.485000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.495086 kernel: audit: type=1334 audit(1768586630.485:562): prog-id=171 op=LOAD Jan 16 18:03:50.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.499903 kernel: audit: type=1300 audit(1768586630.485:562): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.500018 kernel: audit: type=1327 audit(1768586630.485:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.487000 audit: BPF prog-id=171 op=UNLOAD Jan 16 18:03:50.487000 audit[3838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.501162 kernel: audit: type=1334 audit(1768586630.487:563): prog-id=171 op=UNLOAD Jan 16 18:03:50.505918 kernel: audit: type=1300 audit(1768586630.487:563): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.506091 kernel: audit: type=1327 audit(1768586630.487:563): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.487000 audit: BPF prog-id=170 op=UNLOAD Jan 16 18:03:50.487000 audit[3838]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.506978 kernel: audit: type=1334 audit(1768586630.487:564): prog-id=170 op=UNLOAD Jan 16 18:03:50.487000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.487000 audit: BPF prog-id=172 op=LOAD Jan 16 18:03:50.487000 audit[3838]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3358 pid=3838 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.487000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039633134333163356333623130353937613963626362336264373334 Jan 16 18:03:50.529385 containerd[1593]: time="2026-01-16T18:03:50.529255115Z" level=info msg="StartContainer for \"09c1431c5c3b10597a9cbcb3bd7346d0285bc856c97f881d5f8e7e766e849111\" returns successfully" Jan 16 18:03:50.709214 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 18:03:50.709388 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 16 18:03:51.039969 kubelet[2847]: I0116 18:03:51.039355 2847 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-backend-key-pair\") pod \"8e0474a2-d39a-4579-914b-c283a50c8f89\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " Jan 16 18:03:51.039969 kubelet[2847]: I0116 18:03:51.039468 2847 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wrkn2\" (UniqueName: \"kubernetes.io/projected/8e0474a2-d39a-4579-914b-c283a50c8f89-kube-api-access-wrkn2\") pod \"8e0474a2-d39a-4579-914b-c283a50c8f89\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " Jan 16 18:03:51.039969 kubelet[2847]: I0116 18:03:51.039492 2847 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-ca-bundle\") pod \"8e0474a2-d39a-4579-914b-c283a50c8f89\" (UID: \"8e0474a2-d39a-4579-914b-c283a50c8f89\") " Jan 16 18:03:51.042668 kubelet[2847]: I0116 18:03:51.041771 2847 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "8e0474a2-d39a-4579-914b-c283a50c8f89" (UID: "8e0474a2-d39a-4579-914b-c283a50c8f89"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 18:03:51.048268 kubelet[2847]: I0116 18:03:51.048210 2847 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "8e0474a2-d39a-4579-914b-c283a50c8f89" (UID: "8e0474a2-d39a-4579-914b-c283a50c8f89"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 18:03:51.048504 kubelet[2847]: I0116 18:03:51.048339 2847 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8e0474a2-d39a-4579-914b-c283a50c8f89-kube-api-access-wrkn2" (OuterVolumeSpecName: "kube-api-access-wrkn2") pod "8e0474a2-d39a-4579-914b-c283a50c8f89" (UID: "8e0474a2-d39a-4579-914b-c283a50c8f89"). InnerVolumeSpecName "kube-api-access-wrkn2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 18:03:51.134038 systemd[1]: Removed slice kubepods-besteffort-pod8e0474a2_d39a_4579_914b_c283a50c8f89.slice - libcontainer container kubepods-besteffort-pod8e0474a2_d39a_4579_914b_c283a50c8f89.slice. 
Jan 16 18:03:51.140298 kubelet[2847]: I0116 18:03:51.140251 2847 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wrkn2\" (UniqueName: \"kubernetes.io/projected/8e0474a2-d39a-4579-914b-c283a50c8f89-kube-api-access-wrkn2\") on node \"ci-4580-0-0-p-f44e0c3b96\" DevicePath \"\"" Jan 16 18:03:51.140537 kubelet[2847]: I0116 18:03:51.140523 2847 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-ca-bundle\") on node \"ci-4580-0-0-p-f44e0c3b96\" DevicePath \"\"" Jan 16 18:03:51.140646 kubelet[2847]: I0116 18:03:51.140634 2847 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/8e0474a2-d39a-4579-914b-c283a50c8f89-whisker-backend-key-pair\") on node \"ci-4580-0-0-p-f44e0c3b96\" DevicePath \"\"" Jan 16 18:03:51.271259 systemd[1]: var-lib-kubelet-pods-8e0474a2\x2dd39a\x2d4579\x2d914b\x2dc283a50c8f89-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dwrkn2.mount: Deactivated successfully. Jan 16 18:03:51.271654 systemd[1]: var-lib-kubelet-pods-8e0474a2\x2dd39a\x2d4579\x2d914b\x2dc283a50c8f89-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 16 18:03:51.409555 kubelet[2847]: I0116 18:03:51.409255 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-6fm7v" podStartSLOduration=2.443447343 podStartE2EDuration="15.409148909s" podCreationTimestamp="2026-01-16 18:03:36 +0000 UTC" firstStartedPulling="2026-01-16 18:03:37.335281162 +0000 UTC m=+28.367201453" lastFinishedPulling="2026-01-16 18:03:50.300982728 +0000 UTC m=+41.332903019" observedRunningTime="2026-01-16 18:03:51.391739931 +0000 UTC m=+42.423660262" watchObservedRunningTime="2026-01-16 18:03:51.409148909 +0000 UTC m=+42.441069280" Jan 16 18:03:51.492241 systemd[1]: Created slice kubepods-besteffort-pod76ab8953_d1f7_498a_b509_62959852b74e.slice - libcontainer container kubepods-besteffort-pod76ab8953_d1f7_498a_b509_62959852b74e.slice. Jan 16 18:03:51.645312 kubelet[2847]: I0116 18:03:51.645187 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/76ab8953-d1f7-498a-b509-62959852b74e-whisker-backend-key-pair\") pod \"whisker-767775565b-z2w97\" (UID: \"76ab8953-d1f7-498a-b509-62959852b74e\") " pod="calico-system/whisker-767775565b-z2w97" Jan 16 18:03:51.645730 kubelet[2847]: I0116 18:03:51.645572 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/76ab8953-d1f7-498a-b509-62959852b74e-whisker-ca-bundle\") pod \"whisker-767775565b-z2w97\" (UID: \"76ab8953-d1f7-498a-b509-62959852b74e\") " pod="calico-system/whisker-767775565b-z2w97" Jan 16 18:03:51.645985 kubelet[2847]: I0116 18:03:51.645859 2847 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-27htv\" (UniqueName: \"kubernetes.io/projected/76ab8953-d1f7-498a-b509-62959852b74e-kube-api-access-27htv\") pod \"whisker-767775565b-z2w97\" (UID: 
\"76ab8953-d1f7-498a-b509-62959852b74e\") " pod="calico-system/whisker-767775565b-z2w97" Jan 16 18:03:51.798367 containerd[1593]: time="2026-01-16T18:03:51.798305757Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767775565b-z2w97,Uid:76ab8953-d1f7-498a-b509-62959852b74e,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:52.016135 systemd-networkd[1472]: calib9c649c3bf8: Link UP Jan 16 18:03:52.016362 systemd-networkd[1472]: calib9c649c3bf8: Gained carrier Jan 16 18:03:52.037704 containerd[1593]: 2026-01-16 18:03:51.829 [INFO][3931] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 18:03:52.037704 containerd[1593]: 2026-01-16 18:03:51.899 [INFO][3931] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0 whisker-767775565b- calico-system 76ab8953-d1f7-498a-b509-62959852b74e 920 0 2026-01-16 18:03:51 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:767775565b projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 whisker-767775565b-z2w97 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] calib9c649c3bf8 [] [] }} ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-" Jan 16 18:03:52.037704 containerd[1593]: 2026-01-16 18:03:51.900 [INFO][3931] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.037704 containerd[1593]: 2026-01-16 18:03:51.946 [INFO][3941] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" HandleID="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.946 [INFO][3941] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" HandleID="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c1010), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"whisker-767775565b-z2w97", "timestamp":"2026-01-16 18:03:51.946295632 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.946 [INFO][3941] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.946 [INFO][3941] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.946 [INFO][3941] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.960 [INFO][3941] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.967 [INFO][3941] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.974 [INFO][3941] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.977 [INFO][3941] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038026 containerd[1593]: 2026-01-16 18:03:51.980 [INFO][3941] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.980 [INFO][3941] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.984 [INFO][3941] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4 Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.990 [INFO][3941] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.999 [INFO][3941] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.1/26] block=192.168.28.0/26 handle="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.999 [INFO][3941] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.1/26] handle="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.999 [INFO][3941] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:03:52.038436 containerd[1593]: 2026-01-16 18:03:51.999 [INFO][3941] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.1/26] IPv6=[] ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" HandleID="k8s-pod-network.1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.038669 containerd[1593]: 2026-01-16 18:03:52.004 [INFO][3931] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0", GenerateName:"whisker-767775565b-", Namespace:"calico-system", SelfLink:"", UID:"76ab8953-d1f7-498a-b509-62959852b74e", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767775565b", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"whisker-767775565b-z2w97", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9c649c3bf8", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:52.038669 containerd[1593]: 2026-01-16 18:03:52.004 [INFO][3931] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.1/32] ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.038756 containerd[1593]: 2026-01-16 18:03:52.004 [INFO][3931] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib9c649c3bf8 ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.038756 containerd[1593]: 2026-01-16 18:03:52.016 [INFO][3931] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.038802 containerd[1593]: 2026-01-16 18:03:52.017 [INFO][3931] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0", GenerateName:"whisker-767775565b-", Namespace:"calico-system", SelfLink:"", UID:"76ab8953-d1f7-498a-b509-62959852b74e", ResourceVersion:"920", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 51, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"767775565b", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4", Pod:"whisker-767775565b-z2w97", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.28.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"calib9c649c3bf8", MAC:"ce:d9:d6:7a:b9:4a", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:52.038848 containerd[1593]: 2026-01-16 18:03:52.033 [INFO][3931] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" Namespace="calico-system" Pod="whisker-767775565b-z2w97" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-whisker--767775565b--z2w97-eth0" Jan 16 18:03:52.094306 containerd[1593]: time="2026-01-16T18:03:52.094113662Z" level=info msg="connecting to shim 1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4" address="unix:///run/containerd/s/899bfa0621d13a69305285b296ff0861923845b0ab15a3143cf624db9f6dde06" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:52.133338 systemd[1]: Started cri-containerd-1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4.scope - libcontainer container 1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4. Jan 16 18:03:52.160000 audit: BPF prog-id=173 op=LOAD Jan 16 18:03:52.161000 audit: BPF prog-id=174 op=LOAD Jan 16 18:03:52.161000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.161000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.161000 audit: BPF prog-id=174 op=UNLOAD Jan 16 18:03:52.161000 audit[3974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.161000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.162000 audit: BPF prog-id=175 op=LOAD Jan 16 18:03:52.162000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.162000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.163000 audit: BPF prog-id=176 op=LOAD Jan 16 18:03:52.163000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.163000 audit: BPF prog-id=176 op=UNLOAD Jan 16 18:03:52.163000 audit[3974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:03:52.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.163000 audit: BPF prog-id=175 op=UNLOAD Jan 16 18:03:52.163000 audit[3974]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.163000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.164000 audit: BPF prog-id=177 op=LOAD Jan 16 18:03:52.164000 audit[3974]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3962 pid=3974 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:52.164000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3162353136313731343865353934376332643366343962366564663938 Jan 16 18:03:52.256937 containerd[1593]: time="2026-01-16T18:03:52.256876221Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-767775565b-z2w97,Uid:76ab8953-d1f7-498a-b509-62959852b74e,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"1b51617148e5947c2d3f49b6edf98d629fdb84b392259e456b1dd2db5f6796c4\"" Jan 16 18:03:52.259933 containerd[1593]: time="2026-01-16T18:03:52.259786606Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:03:52.617207 containerd[1593]: time="2026-01-16T18:03:52.617163500Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:52.619167 containerd[1593]: time="2026-01-16T18:03:52.619118158Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:03:52.619578 containerd[1593]: time="2026-01-16T18:03:52.619204806Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:52.619630 kubelet[2847]: E0116 18:03:52.619350 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:03:52.619630 kubelet[2847]: E0116 18:03:52.619394 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:03:52.622286 kubelet[2847]: E0116 18:03:52.622212 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9acb3dbfadba421ba67d8f0d97111c75,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:52.625249 containerd[1593]: time="2026-01-16T18:03:52.624984292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:03:52.987168 containerd[1593]: 
time="2026-01-16T18:03:52.986764826Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:52.991218 containerd[1593]: time="2026-01-16T18:03:52.991110141Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:03:52.992691 kubelet[2847]: E0116 18:03:52.992106 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:03:52.992691 kubelet[2847]: E0116 18:03:52.992155 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:03:52.993863 containerd[1593]: time="2026-01-16T18:03:52.991452293Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:52.993917 kubelet[2847]: E0116 18:03:52.992284 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:52.994853 kubelet[2847]: E0116 18:03:52.994728 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:03:53.101964 kubelet[2847]: I0116 18:03:53.101909 2847 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8e0474a2-d39a-4579-914b-c283a50c8f89" path="/var/lib/kubelet/pods/8e0474a2-d39a-4579-914b-c283a50c8f89/volumes" Jan 16 18:03:53.358731 kubelet[2847]: E0116 18:03:53.358575 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not 
found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:03:53.388000 audit[4117]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:53.388000 audit[4117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffca5c5410 a2=0 a3=1 items=0 ppid=2953 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:53.388000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:53.394000 audit[4117]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4117 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:53.394000 audit[4117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffca5c5410 a2=0 a3=1 items=0 ppid=2953 pid=4117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:53.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:53.685108 systemd-networkd[1472]: calib9c649c3bf8: Gained IPv6LL Jan 16 18:03:54.542448 kubelet[2847]: I0116 18:03:54.542340 2847 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 18:03:54.578000 audit[4141]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:54.578000 audit[4141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffeaff4310 a2=0 a3=1 items=0 ppid=2953 
pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:54.578000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:54.582000 audit[4141]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=4141 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:54.582000 audit[4141]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=ffffeaff4310 a2=0 a3=1 items=0 ppid=2953 pid=4141 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:54.582000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:55.672000 audit: BPF prog-id=178 op=LOAD Jan 16 18:03:55.674935 kernel: kauditd_printk_skb: 39 callbacks suppressed Jan 16 18:03:55.675087 kernel: audit: type=1334 audit(1768586635.672:578): prog-id=178 op=LOAD Jan 16 18:03:55.675119 kernel: audit: type=1300 audit(1768586635.672:578): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfece968 a2=98 a3=ffffdfece958 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.672000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfece968 a2=98 a3=ffffdfece958 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:03:55.672000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.679480 kernel: audit: type=1327 audit(1768586635.672:578): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.679599 kernel: audit: type=1334 audit(1768586635.676:579): prog-id=178 op=UNLOAD Jan 16 18:03:55.676000 audit: BPF prog-id=178 op=UNLOAD Jan 16 18:03:55.676000 audit[4197]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdfece938 a3=0 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.682105 kernel: audit: type=1300 audit(1768586635.676:579): arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffdfece938 a3=0 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.682246 kernel: audit: type=1327 audit(1768586635.676:579): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.676000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.684124 kernel: audit: type=1334 audit(1768586635.676:580): prog-id=179 op=LOAD Jan 16 18:03:55.676000 audit: BPF prog-id=179 op=LOAD Jan 16 18:03:55.676000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfece818 a2=74 a3=95 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.686645 kernel: audit: type=1300 audit(1768586635.676:580): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfece818 a2=74 a3=95 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.686718 kernel: audit: type=1327 audit(1768586635.676:580): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.676000 audit: BPF prog-id=179 op=UNLOAD Jan 16 18:03:55.689233 kernel: audit: type=1334 audit(1768586635.676:581): prog-id=179 op=UNLOAD Jan 16 18:03:55.676000 audit[4197]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 
a1=57156c a2=74 a3=95 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.676000 audit: BPF prog-id=180 op=LOAD Jan 16 18:03:55.676000 audit[4197]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffdfece848 a2=40 a3=ffffdfece878 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.676000 audit: BPF prog-id=180 op=UNLOAD Jan 16 18:03:55.676000 audit[4197]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffdfece878 items=0 ppid=4181 pid=4197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.676000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:03:55.682000 audit: BPF prog-id=181 op=LOAD Jan 16 
18:03:55.682000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffee301d18 a2=98 a3=ffffee301d08 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.682000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.683000 audit: BPF prog-id=181 op=UNLOAD Jan 16 18:03:55.683000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffee301ce8 a3=0 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.683000 audit: BPF prog-id=182 op=LOAD Jan 16 18:03:55.683000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee3019a8 a2=74 a3=95 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.683000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.687000 audit: BPF prog-id=182 op=UNLOAD Jan 16 18:03:55.687000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.687000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.687000 audit: BPF prog-id=183 op=LOAD Jan 16 18:03:55.687000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 
success=yes exit=4 a0=5 a1=ffffee301a08 a2=94 a3=2 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.687000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.688000 audit: BPF prog-id=183 op=UNLOAD Jan 16 18:03:55.688000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.688000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.791000 audit: BPF prog-id=184 op=LOAD Jan 16 18:03:55.791000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffee3019c8 a2=40 a3=ffffee3019f8 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.791000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.792000 audit: BPF prog-id=184 op=UNLOAD Jan 16 18:03:55.792000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffee3019f8 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.792000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.802000 audit: BPF prog-id=185 op=LOAD Jan 16 18:03:55.802000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee3019d8 a2=94 a3=4 items=0 
ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.802000 audit: BPF prog-id=185 op=UNLOAD Jan 16 18:03:55.802000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.802000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.803000 audit: BPF prog-id=186 op=LOAD Jan 16 18:03:55.803000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffee301818 a2=94 a3=5 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.803000 audit: BPF prog-id=186 op=UNLOAD Jan 16 18:03:55.803000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.803000 audit: BPF prog-id=187 op=LOAD Jan 16 18:03:55.803000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee301a48 a2=94 a3=6 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.803000 audit: BPF prog-id=187 op=UNLOAD Jan 16 18:03:55.803000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.803000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.804000 audit: BPF prog-id=188 op=LOAD Jan 16 18:03:55.804000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffee301218 a2=94 a3=83 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.804000 audit: BPF prog-id=189 op=LOAD Jan 16 18:03:55.804000 audit[4198]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffee300fd8 a2=94 a3=2 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.804000 audit: BPF prog-id=189 op=UNLOAD Jan 16 18:03:55.804000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.804000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.805000 audit: BPF prog-id=188 op=UNLOAD Jan 16 18:03:55.805000 audit[4198]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=2a52f620 a3=2a522b00 items=0 ppid=4181 pid=4198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.805000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:03:55.817000 audit: BPF prog-id=190 op=LOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf565098 a2=98 a3=ffffcf565088 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.817000 audit: BPF prog-id=190 op=UNLOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffcf565068 a3=0 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.817000 audit: BPF prog-id=191 op=LOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf564f48 a2=74 a3=95 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.817000 audit: BPF prog-id=191 op=UNLOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.817000 audit: BPF prog-id=192 op=LOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcf564f78 a2=40 a3=ffffcf564fa8 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.817000 audit: BPF prog-id=192 op=UNLOAD Jan 16 18:03:55.817000 audit[4201]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffcf564fa8 items=0 ppid=4181 pid=4201 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.817000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:03:55.894496 systemd-networkd[1472]: vxlan.calico: Link UP Jan 16 18:03:55.894503 systemd-networkd[1472]: vxlan.calico: Gained carrier Jan 16 18:03:55.915000 audit: BPF prog-id=193 op=LOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef0262f8 a2=98 a3=ffffef0262e8 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=193 op=UNLOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 
a1=57156c a2=ffffef0262c8 a3=0 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=194 op=LOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef025fd8 a2=74 a3=95 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=194 op=UNLOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=195 op=LOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffef026038 a2=94 a3=2 items=0 ppid=4181 pid=4228 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=195 op=UNLOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=196 op=LOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef025eb8 a2=40 a3=ffffef025ee8 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=196 op=UNLOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffef025ee8 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=197 op=LOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef026008 a2=94 a3=b7 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.915000 audit: BPF prog-id=197 op=UNLOAD Jan 16 18:03:55.915000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.915000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.917000 audit: BPF prog-id=198 op=LOAD Jan 16 18:03:55.917000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef0256b8 a2=94 a3=2 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.917000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.917000 audit: BPF prog-id=198 op=UNLOAD Jan 16 18:03:55.917000 audit[4228]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.917000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.917000 audit: BPF prog-id=199 op=LOAD Jan 16 18:03:55.917000 audit[4228]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffef025848 a2=94 a3=30 items=0 ppid=4181 pid=4228 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.917000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:03:55.921000 audit: BPF prog-id=200 op=LOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffd655af08 a2=98 a3=ffffd655aef8 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:03:55.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:55.921000 audit: BPF prog-id=200 op=UNLOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffd655aed8 a3=0 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:55.921000 audit: BPF prog-id=201 op=LOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd655ab98 a2=74 a3=95 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:55.921000 audit: BPF prog-id=201 op=UNLOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.921000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:55.921000 audit: BPF prog-id=202 op=LOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd655abf8 a2=94 a3=2 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:55.921000 audit: BPF prog-id=202 op=UNLOAD Jan 16 18:03:55.921000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:55.921000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.050000 audit: BPF prog-id=203 op=LOAD Jan 16 18:03:56.050000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffd655abb8 a2=40 a3=ffffd655abe8 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.050000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.050000 audit: BPF prog-id=203 op=UNLOAD Jan 16 18:03:56.050000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffd655abe8 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.050000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=204 op=LOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd655abc8 a2=94 a3=4 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=204 op=UNLOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=205 op=LOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffd655aa08 a2=94 a3=5 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=205 op=UNLOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=206 op=LOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd655ac38 a2=94 a3=6 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.061000 audit: BPF prog-id=206 op=UNLOAD Jan 16 18:03:56.061000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.061000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.062000 audit: BPF prog-id=207 op=LOAD Jan 16 18:03:56.062000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffd655a408 a2=94 a3=83 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.062000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.062000 audit: BPF prog-id=208 op=LOAD Jan 16 18:03:56.062000 audit[4231]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffd655a1c8 a2=94 a3=2 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.062000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.062000 audit: BPF prog-id=208 op=UNLOAD Jan 16 18:03:56.062000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.062000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.063000 audit: BPF prog-id=207 op=UNLOAD Jan 16 18:03:56.063000 audit[4231]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=fa21620 a3=fa14b00 items=0 ppid=4181 pid=4231 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.063000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:03:56.068000 audit: BPF prog-id=199 op=UNLOAD Jan 16 18:03:56.068000 audit[4181]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000efd980 a2=0 a3=0 items=0 ppid=3995 pid=4181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.068000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 18:03:56.098494 containerd[1593]: time="2026-01-16T18:03:56.098446285Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995d6d9cc-4t5v5,Uid:803aa494-41ae-4e2d-9b86-fe41697d291d,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:56.099170 containerd[1593]: time="2026-01-16T18:03:56.098869640Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-lxhzl,Uid:acf5809a-b3e8-41d0-86b6-edf701abdda5,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:03:56.175000 audit[4275]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4275 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.175000 audit[4275]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffe6b807a0 a2=0 a3=ffffa03c7fa8 items=0 ppid=4181 pid=4275 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.175000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.176000 audit[4274]: NETFILTER_CFG table=raw:124 family=2 entries=21 op=nft_register_chain pid=4274 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.176000 audit[4274]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe27478c0 a2=0 a3=ffffa0515fa8 items=0 ppid=4181 pid=4274 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.176000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.184000 audit[4282]: NETFILTER_CFG table=mangle:125 family=2 entries=16 op=nft_register_chain pid=4282 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.184000 audit[4282]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe3794d40 a2=0 a3=ffffa102afa8 items=0 ppid=4181 pid=4282 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.184000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.197000 audit[4276]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4276 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.197000 audit[4276]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffe72f63e0 a2=0 a3=ffff9d991fa8 items=0 ppid=4181 pid=4276 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.197000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.350866 systemd-networkd[1472]: calib560d1e15f7: Link UP Jan 16 18:03:56.353315 systemd-networkd[1472]: calib560d1e15f7: Gained carrier Jan 16 18:03:56.372123 containerd[1593]: 2026-01-16 18:03:56.214 [INFO][4252] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0 calico-kube-controllers-7995d6d9cc- calico-system 803aa494-41ae-4e2d-9b86-fe41697d291d 855 0 2026-01-16 18:03:37 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:7995d6d9cc projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 calico-kube-controllers-7995d6d9cc-4t5v5 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calib560d1e15f7 [] [] }} ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-" Jan 16 18:03:56.372123 containerd[1593]: 2026-01-16 18:03:56.216 [INFO][4252] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372123 containerd[1593]: 2026-01-16 18:03:56.271 [INFO][4293] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" HandleID="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4293] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" HandleID="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103800), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-f44e0c3b96", 
"pod":"calico-kube-controllers-7995d6d9cc-4t5v5", "timestamp":"2026-01-16 18:03:56.271750249 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4293] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4293] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4293] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.284 [INFO][4293] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.299 [INFO][4293] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.306 [INFO][4293] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.310 [INFO][4293] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372375 containerd[1593]: 2026-01-16 18:03:56.314 [INFO][4293] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.315 [INFO][4293] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" 
host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.319 [INFO][4293] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.325 [INFO][4293] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.333 [INFO][4293] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.2/26] block=192.168.28.0/26 handle="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.333 [INFO][4293] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.2/26] handle="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.333 [INFO][4293] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:03:56.372584 containerd[1593]: 2026-01-16 18:03:56.334 [INFO][4293] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.2/26] IPv6=[] ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" HandleID="k8s-pod-network.ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372720 containerd[1593]: 2026-01-16 18:03:56.336 [INFO][4252] cni-plugin/k8s.go 418: Populated endpoint ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0", GenerateName:"calico-kube-controllers-7995d6d9cc-", Namespace:"calico-system", SelfLink:"", UID:"803aa494-41ae-4e2d-9b86-fe41697d291d", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995d6d9cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"calico-kube-controllers-7995d6d9cc-4t5v5", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib560d1e15f7", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:56.372773 containerd[1593]: 2026-01-16 18:03:56.336 [INFO][4252] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.2/32] ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372773 containerd[1593]: 2026-01-16 18:03:56.336 [INFO][4252] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calib560d1e15f7 ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372773 containerd[1593]: 2026-01-16 18:03:56.354 [INFO][4252] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.372833 containerd[1593]: 2026-01-16 18:03:56.355 [INFO][4252] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" Pod="calico-kube-controllers-7995d6d9cc-4t5v5" 
WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0", GenerateName:"calico-kube-controllers-7995d6d9cc-", Namespace:"calico-system", SelfLink:"", UID:"803aa494-41ae-4e2d-9b86-fe41697d291d", ResourceVersion:"855", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"7995d6d9cc", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a", Pod:"calico-kube-controllers-7995d6d9cc-4t5v5", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.28.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calib560d1e15f7", MAC:"86:28:fd:d8:98:1b", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:56.372880 containerd[1593]: 2026-01-16 18:03:56.367 [INFO][4252] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" Namespace="calico-system" 
Pod="calico-kube-controllers-7995d6d9cc-4t5v5" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--kube--controllers--7995d6d9cc--4t5v5-eth0" Jan 16 18:03:56.408000 audit[4318]: NETFILTER_CFG table=filter:127 family=2 entries=36 op=nft_register_chain pid=4318 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.408000 audit[4318]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19576 a0=3 a1=ffffc12d6230 a2=0 a3=ffffa18defa8 items=0 ppid=4181 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.408000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.416493 containerd[1593]: time="2026-01-16T18:03:56.416081779Z" level=info msg="connecting to shim ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a" address="unix:///run/containerd/s/a06ea8cc9369d46d20e9873aab31fabf1704a3fa3be04fcde6e991b3c291c785" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:56.464262 systemd[1]: Started cri-containerd-ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a.scope - libcontainer container ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a. 
Jan 16 18:03:56.471002 systemd-networkd[1472]: cali1339cc304f4: Link UP Jan 16 18:03:56.476155 systemd-networkd[1472]: cali1339cc304f4: Gained carrier Jan 16 18:03:56.496591 containerd[1593]: 2026-01-16 18:03:56.215 [INFO][4250] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0 calico-apiserver-795d7bd556- calico-apiserver acf5809a-b3e8-41d0-86b6-edf701abdda5 852 0 2026-01-16 18:03:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795d7bd556 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 calico-apiserver-795d7bd556-lxhzl eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali1339cc304f4 [] [] }} ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-" Jan 16 18:03:56.496591 containerd[1593]: 2026-01-16 18:03:56.215 [INFO][4250] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.496591 containerd[1593]: 2026-01-16 18:03:56.271 [INFO][4295] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" HandleID="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 
18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4295] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" HandleID="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d960), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"calico-apiserver-795d7bd556-lxhzl", "timestamp":"2026-01-16 18:03:56.271749409 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.272 [INFO][4295] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.333 [INFO][4295] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.333 [INFO][4295] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.385 [INFO][4295] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.403 [INFO][4295] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.412 [INFO][4295] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.417 [INFO][4295] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497049 containerd[1593]: 2026-01-16 18:03:56.425 [INFO][4295] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.425 [INFO][4295] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.428 [INFO][4295] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3 Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.438 [INFO][4295] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.448 [INFO][4295] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.3/26] block=192.168.28.0/26 handle="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.448 [INFO][4295] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.3/26] handle="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.449 [INFO][4295] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:03:56.497278 containerd[1593]: 2026-01-16 18:03:56.449 [INFO][4295] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.3/26] IPv6=[] ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" HandleID="k8s-pod-network.2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.497418 containerd[1593]: 2026-01-16 18:03:56.454 [INFO][4250] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0", GenerateName:"calico-apiserver-795d7bd556-", Namespace:"calico-apiserver", SelfLink:"", UID:"acf5809a-b3e8-41d0-86b6-edf701abdda5", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"795d7bd556", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"calico-apiserver-795d7bd556-lxhzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1339cc304f4", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:56.497469 containerd[1593]: 2026-01-16 18:03:56.454 [INFO][4250] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.3/32] ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.497469 containerd[1593]: 2026-01-16 18:03:56.454 [INFO][4250] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali1339cc304f4 ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.497469 containerd[1593]: 2026-01-16 18:03:56.478 [INFO][4250] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" 
WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.497530 containerd[1593]: 2026-01-16 18:03:56.479 [INFO][4250] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0", GenerateName:"calico-apiserver-795d7bd556-", Namespace:"calico-apiserver", SelfLink:"", UID:"acf5809a-b3e8-41d0-86b6-edf701abdda5", ResourceVersion:"852", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795d7bd556", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3", Pod:"calico-apiserver-795d7bd556-lxhzl", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali1339cc304f4", MAC:"12:79:73:34:fe:76", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:56.497577 containerd[1593]: 2026-01-16 18:03:56.492 [INFO][4250] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-lxhzl" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--lxhzl-eth0" Jan 16 18:03:56.510000 audit: BPF prog-id=209 op=LOAD Jan 16 18:03:56.511000 audit: BPF prog-id=210 op=LOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=210 op=UNLOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=211 op=LOAD Jan 16 18:03:56.511000 audit[4338]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=212 op=LOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=212 op=UNLOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=211 
op=UNLOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.511000 audit: BPF prog-id=213 op=LOAD Jan 16 18:03:56.511000 audit[4338]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4327 pid=4338 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.511000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6562633731303665363131373035383966323564346431363530396230 Jan 16 18:03:56.546641 containerd[1593]: time="2026-01-16T18:03:56.546009899Z" level=info msg="connecting to shim 2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3" address="unix:///run/containerd/s/dc732c00ee0e92f01f204a9190bb6e9f78f6b2bbbcccb2921bcbe812d5a8504a" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:56.556753 containerd[1593]: time="2026-01-16T18:03:56.556702117Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-7995d6d9cc-4t5v5,Uid:803aa494-41ae-4e2d-9b86-fe41697d291d,Namespace:calico-system,Attempt:0,} returns sandbox id \"ebc7106e61170589f25d4d16509b04164596262cf95fe14eeca11994e034723a\"" Jan 16 
18:03:56.559773 containerd[1593]: time="2026-01-16T18:03:56.559737452Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:03:56.589407 systemd[1]: Started cri-containerd-2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3.scope - libcontainer container 2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3. Jan 16 18:03:56.600000 audit[4405]: NETFILTER_CFG table=filter:128 family=2 entries=60 op=nft_register_chain pid=4405 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:56.600000 audit[4405]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=32248 a0=3 a1=ffffd15f9c80 a2=0 a3=ffffa6c79fa8 items=0 ppid=4181 pid=4405 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.600000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:56.620000 audit: BPF prog-id=214 op=LOAD Jan 16 18:03:56.621000 audit: BPF prog-id=215 op=LOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=215 op=UNLOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 
a2=0 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=216 op=LOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=217 op=LOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=217 op=UNLOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=216 op=UNLOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.621000 audit: BPF prog-id=218 op=LOAD Jan 16 18:03:56.621000 audit[4391]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4375 pid=4391 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:56.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3262373333623931353566323436343664316530303435366335316233 Jan 16 18:03:56.677645 containerd[1593]: 
time="2026-01-16T18:03:56.677486188Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-lxhzl,Uid:acf5809a-b3e8-41d0-86b6-edf701abdda5,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"2b733b9155f24646d1e00456c51b3b559401ceda0e68a6bb5da7a6a9b0539ab3\"" Jan 16 18:03:56.906721 containerd[1593]: time="2026-01-16T18:03:56.906413507Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:56.909593 containerd[1593]: time="2026-01-16T18:03:56.909429521Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:03:56.909593 containerd[1593]: time="2026-01-16T18:03:56.909542970Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:56.910279 kubelet[2847]: E0116 18:03:56.910173 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:03:56.910279 kubelet[2847]: E0116 18:03:56.910239 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:03:56.912456 kubelet[2847]: E0116 18:03:56.910675 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft29j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:56.912456 kubelet[2847]: E0116 18:03:56.911875 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:03:56.912799 containerd[1593]: time="2026-01-16T18:03:56.911271916Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:03:57.100996 containerd[1593]: time="2026-01-16T18:03:57.100761655Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:coredns-674b8bbfcf-26vdn,Uid:199a61b9-0789-47e3-ae0e-1db7815072b1,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:57.254363 containerd[1593]: time="2026-01-16T18:03:57.254315255Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:57.257624 containerd[1593]: time="2026-01-16T18:03:57.257564364Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:03:57.257930 containerd[1593]: time="2026-01-16T18:03:57.257614208Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:57.259171 kubelet[2847]: E0116 18:03:57.258033 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:57.259171 kubelet[2847]: E0116 18:03:57.258076 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:57.259171 kubelet[2847]: E0116 18:03:57.258234 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x7th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:57.259440 kubelet[2847]: E0116 18:03:57.259386 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:03:57.333355 systemd-networkd[1472]: caliee2897634c4: Link UP Jan 16 18:03:57.336407 systemd-networkd[1472]: caliee2897634c4: Gained carrier Jan 16 18:03:57.359769 containerd[1593]: 2026-01-16 18:03:57.175 [INFO][4420] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0 coredns-674b8bbfcf- kube-system 199a61b9-0789-47e3-ae0e-1db7815072b1 846 0 2026-01-16 18:03:14 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 coredns-674b8bbfcf-26vdn eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] caliee2897634c4 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-" Jan 16 18:03:57.359769 containerd[1593]: 2026-01-16 18:03:57.177 [INFO][4420] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" 
Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.359769 containerd[1593]: 2026-01-16 18:03:57.232 [INFO][4431] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" HandleID="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.233 [INFO][4431] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" HandleID="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3010), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"coredns-674b8bbfcf-26vdn", "timestamp":"2026-01-16 18:03:57.23245453 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.234 [INFO][4431] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.234 [INFO][4431] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.234 [INFO][4431] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.250 [INFO][4431] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.269 [INFO][4431] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.276 [INFO][4431] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.287 [INFO][4431] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360019 containerd[1593]: 2026-01-16 18:03:57.292 [INFO][4431] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.292 [INFO][4431] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.298 [INFO][4431] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0 Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.309 [INFO][4431] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.318 [INFO][4431] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.4/26] block=192.168.28.0/26 handle="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.318 [INFO][4431] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.4/26] handle="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.318 [INFO][4431] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:03:57.360248 containerd[1593]: 2026-01-16 18:03:57.319 [INFO][4431] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.4/26] IPv6=[] ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" HandleID="k8s-pod-network.aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.324 [INFO][4420] cni-plugin/k8s.go 418: Populated endpoint ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"199a61b9-0789-47e3-ae0e-1db7815072b1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"coredns-674b8bbfcf-26vdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee2897634c4", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.327 [INFO][4420] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.4/32] ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.327 [INFO][4420] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to caliee2897634c4 ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.336 [INFO][4420] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.337 [INFO][4420] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"199a61b9-0789-47e3-ae0e-1db7815072b1", ResourceVersion:"846", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0", Pod:"coredns-674b8bbfcf-26vdn", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"caliee2897634c4", 
MAC:"d2:56:90:09:40:dd", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:57.360793 containerd[1593]: 2026-01-16 18:03:57.354 [INFO][4420] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" Namespace="kube-system" Pod="coredns-674b8bbfcf-26vdn" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--26vdn-eth0" Jan 16 18:03:57.378355 kubelet[2847]: E0116 18:03:57.378295 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:03:57.383088 kubelet[2847]: E0116 18:03:57.382862 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:03:57.417439 containerd[1593]: time="2026-01-16T18:03:57.417135101Z" level=info msg="connecting to shim aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0" address="unix:///run/containerd/s/36d9063f48028a8e9013445621759b8966a2329288f2188a6532041165920261" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:57.424000 audit[4455]: NETFILTER_CFG table=filter:129 family=2 entries=46 op=nft_register_chain pid=4455 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:57.424000 audit[4455]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23724 a0=3 a1=ffffee570500 a2=0 a3=ffffa9ff1fa8 items=0 ppid=4181 pid=4455 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.424000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:57.469570 systemd[1]: Started cri-containerd-aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0.scope - libcontainer container aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0. 
Jan 16 18:03:57.483000 audit: BPF prog-id=219 op=LOAD Jan 16 18:03:57.484000 audit: BPF prog-id=220 op=LOAD Jan 16 18:03:57.484000 audit[4468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.484000 audit: BPF prog-id=220 op=UNLOAD Jan 16 18:03:57.484000 audit[4468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.484000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.485000 audit: BPF prog-id=221 op=LOAD Jan 16 18:03:57.485000 audit[4468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.485000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.485000 audit: BPF prog-id=222 op=LOAD Jan 16 18:03:57.485000 audit[4468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.485000 audit: BPF prog-id=222 op=UNLOAD Jan 16 18:03:57.485000 audit[4468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.485000 audit: BPF prog-id=221 op=UNLOAD Jan 16 18:03:57.485000 audit[4468]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:03:57.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.485000 audit: BPF prog-id=223 op=LOAD Jan 16 18:03:57.485000 audit[4468]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4457 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.485000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6161316237376666623264636333316430326234316639353032663262 Jan 16 18:03:57.496000 audit[4488]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:57.496000 audit[4488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff2e50bb0 a2=0 a3=1 items=0 ppid=2953 pid=4488 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.496000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:57.501000 audit[4488]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4488 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:57.501000 audit[4488]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff2e50bb0 a2=0 a3=1 items=0 ppid=2953 pid=4488 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.501000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:57.519042 containerd[1593]: time="2026-01-16T18:03:57.518049395Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-26vdn,Uid:199a61b9-0789-47e3-ae0e-1db7815072b1,Namespace:kube-system,Attempt:0,} returns sandbox id \"aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0\"" Jan 16 18:03:57.525381 systemd-networkd[1472]: vxlan.calico: Gained IPv6LL Jan 16 18:03:57.529377 containerd[1593]: time="2026-01-16T18:03:57.528002656Z" level=info msg="CreateContainer within sandbox \"aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:03:57.550389 containerd[1593]: time="2026-01-16T18:03:57.550337461Z" level=info msg="Container f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:57.558693 containerd[1593]: time="2026-01-16T18:03:57.558623185Z" level=info msg="CreateContainer within sandbox \"aa1b77ffb2dcc31d02b41f9502f2b14f425e13ad1dd213cd31277c9661cdefe0\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036\"" Jan 16 18:03:57.561171 containerd[1593]: time="2026-01-16T18:03:57.559430172Z" level=info msg="StartContainer for \"f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036\"" Jan 16 18:03:57.563454 containerd[1593]: time="2026-01-16T18:03:57.563104955Z" level=info msg="connecting to shim f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036" 
address="unix:///run/containerd/s/36d9063f48028a8e9013445621759b8966a2329288f2188a6532041165920261" protocol=ttrpc version=3 Jan 16 18:03:57.587229 systemd[1]: Started cri-containerd-f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036.scope - libcontainer container f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036. Jan 16 18:03:57.603000 audit: BPF prog-id=224 op=LOAD Jan 16 18:03:57.604000 audit: BPF prog-id=225 op=LOAD Jan 16 18:03:57.604000 audit[4496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.604000 audit: BPF prog-id=225 op=UNLOAD Jan 16 18:03:57.604000 audit[4496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.604000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.605000 audit: BPF prog-id=226 op=LOAD Jan 16 18:03:57.605000 audit[4496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.605000 audit: BPF prog-id=227 op=LOAD Jan 16 18:03:57.605000 audit[4496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.605000 audit: BPF prog-id=227 op=UNLOAD Jan 16 18:03:57.605000 audit[4496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.605000 audit: BPF prog-id=226 op=UNLOAD Jan 16 18:03:57.605000 audit[4496]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4457 pid=4496 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.605000 audit: BPF prog-id=228 op=LOAD Jan 16 18:03:57.605000 audit[4496]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4457 pid=4496 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.605000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6635386536313239336264646533656162333030303539356336653432 Jan 16 18:03:57.635961 containerd[1593]: time="2026-01-16T18:03:57.635876645Z" level=info msg="StartContainer for \"f58e61293bdde3eab3000595c6e427fe4c32afcbf805b9e05f4d5e5ed4aca036\" returns successfully" Jan 16 18:03:58.101536 systemd-networkd[1472]: cali1339cc304f4: Gained IPv6LL Jan 16 18:03:58.293387 systemd-networkd[1472]: calib560d1e15f7: Gained IPv6LL Jan 16 18:03:58.387855 kubelet[2847]: E0116 18:03:58.387739 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:03:58.389129 kubelet[2847]: E0116 18:03:58.387874 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:03:58.428654 kubelet[2847]: I0116 18:03:58.428574 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-26vdn" podStartSLOduration=44.428549079 podStartE2EDuration="44.428549079s" podCreationTimestamp="2026-01-16 18:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:03:58.407880801 +0000 UTC m=+49.439801132" watchObservedRunningTime="2026-01-16 18:03:58.428549079 +0000 UTC m=+49.460469410" Jan 16 18:03:58.435000 audit[4529]: NETFILTER_CFG table=filter:132 family=2 entries=20 op=nft_register_rule pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:58.435000 audit[4529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff7f979d0 a2=0 a3=1 items=0 ppid=2953 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:58.435000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 
Jan 16 18:03:58.439000 audit[4529]: NETFILTER_CFG table=nat:133 family=2 entries=14 op=nft_register_rule pid=4529 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:58.439000 audit[4529]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=fffff7f979d0 a2=0 a3=1 items=0 ppid=2953 pid=4529 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:58.439000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:58.677128 systemd-networkd[1472]: caliee2897634c4: Gained IPv6LL Jan 16 18:03:59.100783 containerd[1593]: time="2026-01-16T18:03:59.100080607Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7tcq,Uid:c7bb61dc-6bdf-4bf7-9a33-c67b671e2820,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:59.102476 containerd[1593]: time="2026-01-16T18:03:59.102041644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pr96z,Uid:c5d82fbc-6b84-4542-a0ba-6775b07ac632,Namespace:kube-system,Attempt:0,}" Jan 16 18:03:59.102476 containerd[1593]: time="2026-01-16T18:03:59.102130171Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zmqhv,Uid:29a43599-84fe-43df-8ccd-cc9d89aeeb36,Namespace:calico-system,Attempt:0,}" Jan 16 18:03:59.347826 systemd-networkd[1472]: calie27caa42560: Link UP Jan 16 18:03:59.350762 systemd-networkd[1472]: calie27caa42560: Gained carrier Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.192 [INFO][4542] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0 coredns-674b8bbfcf- kube-system c5d82fbc-6b84-4542-a0ba-6775b07ac632 849 0 2026-01-16 18:03:14 +0000 UTC 
map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 coredns-674b8bbfcf-pr96z eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calie27caa42560 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.193 [INFO][4542] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.266 [INFO][4564] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" HandleID="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.266 [INFO][4564] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" HandleID="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b650), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"coredns-674b8bbfcf-pr96z", "timestamp":"2026-01-16 18:03:59.266063434 +0000 UTC"}, 
Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.267 [INFO][4564] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.267 [INFO][4564] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.267 [INFO][4564] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.285 [INFO][4564] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.294 [INFO][4564] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.300 [INFO][4564] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.303 [INFO][4564] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.307 [INFO][4564] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.307 [INFO][4564] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.309 [INFO][4564] 
ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.317 [INFO][4564] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4564] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.28.5/26] block=192.168.28.0/26 handle="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4564] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.5/26] handle="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4564] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:03:59.375871 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4564] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.5/26] IPv6=[] ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" HandleID="k8s-pod-network.49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.376632 containerd[1593]: 2026-01-16 18:03:59.338 [INFO][4542] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5d82fbc-6b84-4542-a0ba-6775b07ac632", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"coredns-674b8bbfcf-pr96z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, 
InterfaceName:"calie27caa42560", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.376632 containerd[1593]: 2026-01-16 18:03:59.338 [INFO][4542] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.5/32] ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.376632 containerd[1593]: 2026-01-16 18:03:59.338 [INFO][4542] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie27caa42560 ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.376632 containerd[1593]: 2026-01-16 18:03:59.353 [INFO][4542] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.376632 containerd[1593]: 2026-01-16 18:03:59.356 [INFO][4542] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" 
WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"c5d82fbc-6b84-4542-a0ba-6775b07ac632", ResourceVersion:"849", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 14, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f", Pod:"coredns-674b8bbfcf-pr96z", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.28.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calie27caa42560", MAC:"3a:44:09:20:44:eb", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.376632 
containerd[1593]: 2026-01-16 18:03:59.368 [INFO][4542] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" Namespace="kube-system" Pod="coredns-674b8bbfcf-pr96z" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-coredns--674b8bbfcf--pr96z-eth0" Jan 16 18:03:59.433000 audit[4602]: NETFILTER_CFG table=filter:134 family=2 entries=46 op=nft_register_chain pid=4602 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:59.433000 audit[4602]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23180 a0=3 a1=ffffce2e4030 a2=0 a3=ffffb3adbfa8 items=0 ppid=4181 pid=4602 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.433000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:59.459101 containerd[1593]: time="2026-01-16T18:03:59.459038058Z" level=info msg="connecting to shim 49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f" address="unix:///run/containerd/s/4874158362f106ebe119de7f426dcb10bf94621ae2e259580c3d641372d6f56d" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:59.520156 systemd[1]: Started cri-containerd-49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f.scope - libcontainer container 49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f. 
Jan 16 18:03:59.529788 systemd-networkd[1472]: calic50072d36ac: Link UP Jan 16 18:03:59.538182 systemd-networkd[1472]: calic50072d36ac: Gained carrier Jan 16 18:03:59.543000 audit[4630]: NETFILTER_CFG table=filter:135 family=2 entries=17 op=nft_register_rule pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:59.543000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd9a38320 a2=0 a3=1 items=0 ppid=2953 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.543000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:59.550000 audit[4630]: NETFILTER_CFG table=nat:136 family=2 entries=35 op=nft_register_chain pid=4630 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:03:59.553000 audit: BPF prog-id=229 op=LOAD Jan 16 18:03:59.550000 audit[4630]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd9a38320 a2=0 a3=1 items=0 ppid=2953 pid=4630 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.550000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:03:59.556000 audit: BPF prog-id=230 op=LOAD Jan 16 18:03:59.556000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220180 a2=98 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.556000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.557000 audit: BPF prog-id=230 op=UNLOAD Jan 16 18:03:59.557000 audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.557000 audit: BPF prog-id=231 op=LOAD Jan 16 18:03:59.557000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40002203e8 a2=98 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.557000 audit: BPF prog-id=232 op=LOAD Jan 16 18:03:59.557000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000220168 a2=98 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:03:59.557000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.558000 audit: BPF prog-id=232 op=UNLOAD Jan 16 18:03:59.558000 audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.558000 audit: BPF prog-id=231 op=UNLOAD Jan 16 18:03:59.558000 audit[4618]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.558000 audit: BPF prog-id=233 op=LOAD Jan 16 18:03:59.558000 audit[4618]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000220648 a2=98 a3=0 items=0 ppid=4608 pid=4618 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.558000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439643764336163616239613237373938343034663132623131653936 Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.243 [INFO][4546] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0 goldmane-666569f655- calico-system 29a43599-84fe-43df-8ccd-cc9d89aeeb36 857 0 2026-01-16 18:03:33 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 goldmane-666569f655-zmqhv eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic50072d36ac [] [] }} ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.243 [INFO][4546] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.285 [INFO][4573] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" HandleID="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" 
Workload="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.285 [INFO][4573] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" HandleID="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000255590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"goldmane-666569f655-zmqhv", "timestamp":"2026-01-16 18:03:59.285098755 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.285 [INFO][4573] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4573] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.335 [INFO][4573] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.389 [INFO][4573] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.410 [INFO][4573] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.445 [INFO][4573] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.449 [INFO][4573] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.459 [INFO][4573] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.460 [INFO][4573] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.466 [INFO][4573] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4 Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.488 [INFO][4573] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.498 [INFO][4573] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.6/26] block=192.168.28.0/26 handle="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.498 [INFO][4573] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.6/26] handle="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.498 [INFO][4573] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:03:59.603201 containerd[1593]: 2026-01-16 18:03:59.498 [INFO][4573] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.6/26] IPv6=[] ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" HandleID="k8s-pod-network.3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.506 [INFO][4546] cni-plugin/k8s.go 418: Populated endpoint ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"29a43599-84fe-43df-8ccd-cc9d89aeeb36", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"goldmane-666569f655-zmqhv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic50072d36ac", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.506 [INFO][4546] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.6/32] ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.506 [INFO][4546] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic50072d36ac ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.543 [INFO][4546] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.544 [INFO][4546] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"29a43599-84fe-43df-8ccd-cc9d89aeeb36", ResourceVersion:"857", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4", Pod:"goldmane-666569f655-zmqhv", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.28.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic50072d36ac", MAC:"92:a0:11:9e:7d:da", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.603795 containerd[1593]: 2026-01-16 18:03:59.574 [INFO][4546] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" Namespace="calico-system" Pod="goldmane-666569f655-zmqhv" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-goldmane--666569f655--zmqhv-eth0" Jan 16 18:03:59.640122 systemd-networkd[1472]: cali8015456143a: Link UP Jan 16 18:03:59.643553 systemd-networkd[1472]: cali8015456143a: Gained carrier Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.229 [INFO][4530] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0 csi-node-driver- calico-system c7bb61dc-6bdf-4bf7-9a33-c67b671e2820 755 0 2026-01-16 18:03:37 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 csi-node-driver-x7tcq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali8015456143a [] [] }} ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.229 [INFO][4530] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.299 [INFO][4579] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" 
HandleID="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.299 [INFO][4579] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" HandleID="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"csi-node-driver-x7tcq", "timestamp":"2026-01-16 18:03:59.299052071 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.299 [INFO][4579] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.499 [INFO][4579] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.500 [INFO][4579] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.533 [INFO][4579] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.550 [INFO][4579] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.569 [INFO][4579] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.578 [INFO][4579] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.584 [INFO][4579] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.584 [INFO][4579] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.588 [INFO][4579] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.600 [INFO][4579] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.620 [INFO][4579] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.7/26] block=192.168.28.0/26 handle="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.624 [INFO][4579] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.7/26] handle="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.624 [INFO][4579] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:03:59.675517 containerd[1593]: 2026-01-16 18:03:59.624 [INFO][4579] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.7/26] IPv6=[] ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" HandleID="k8s-pod-network.f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 18:03:59.635 [INFO][4530] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", 
"pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"csi-node-driver-x7tcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8015456143a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 18:03:59.635 [INFO][4530] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.7/32] ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 18:03:59.635 [INFO][4530] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali8015456143a ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 18:03:59.638 [INFO][4530] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 
18:03:59.639 [INFO][4530] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"c7bb61dc-6bdf-4bf7-9a33-c67b671e2820", ResourceVersion:"755", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 37, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da", Pod:"csi-node-driver-x7tcq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.28.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali8015456143a", MAC:"fe:e1:38:58:a1:54", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:03:59.677568 containerd[1593]: 2026-01-16 18:03:59.667 
[INFO][4530] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" Namespace="calico-system" Pod="csi-node-driver-x7tcq" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-csi--node--driver--x7tcq-eth0" Jan 16 18:03:59.698219 containerd[1593]: time="2026-01-16T18:03:59.698180892Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-pr96z,Uid:c5d82fbc-6b84-4542-a0ba-6775b07ac632,Namespace:kube-system,Attempt:0,} returns sandbox id \"49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f\"" Jan 16 18:03:59.710611 containerd[1593]: time="2026-01-16T18:03:59.710379907Z" level=info msg="CreateContainer within sandbox \"49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:03:59.712000 audit[4667]: NETFILTER_CFG table=filter:137 family=2 entries=52 op=nft_register_chain pid=4667 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:59.712000 audit[4667]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=27524 a0=3 a1=ffffffb27960 a2=0 a3=ffff7fd81fa8 items=0 ppid=4181 pid=4667 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.712000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:59.721230 containerd[1593]: time="2026-01-16T18:03:59.721072882Z" level=info msg="connecting to shim 3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4" address="unix:///run/containerd/s/4eb1efd4f8dc8758c41657d08cac230249697b8779bdb81321a6e8d31cfca2ad" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:59.764065 containerd[1593]: 
time="2026-01-16T18:03:59.764010514Z" level=info msg="Container 09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:03:59.777322 containerd[1593]: time="2026-01-16T18:03:59.777254172Z" level=info msg="CreateContainer within sandbox \"49d7d3acab9a27798404f12b11e969127d092ac58cb59d3105c19c079c4a752f\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549\"" Jan 16 18:03:59.784244 containerd[1593]: time="2026-01-16T18:03:59.784178966Z" level=info msg="StartContainer for \"09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549\"" Jan 16 18:03:59.786116 containerd[1593]: time="2026-01-16T18:03:59.786073277Z" level=info msg="connecting to shim 09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549" address="unix:///run/containerd/s/4874158362f106ebe119de7f426dcb10bf94621ae2e259580c3d641372d6f56d" protocol=ttrpc version=3 Jan 16 18:03:59.792249 containerd[1593]: time="2026-01-16T18:03:59.790674645Z" level=info msg="connecting to shim f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da" address="unix:///run/containerd/s/687048da3a630017030d5c88593c60d6910bd4dab9ab2380596e59bd916d7392" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:03:59.801276 systemd[1]: Started cri-containerd-3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4.scope - libcontainer container 3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4. 
Jan 16 18:03:59.804000 audit[4705]: NETFILTER_CFG table=filter:138 family=2 entries=48 op=nft_register_chain pid=4705 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:03:59.804000 audit[4705]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23108 a0=3 a1=ffffd06a91d0 a2=0 a3=ffffb6be5fa8 items=0 ppid=4181 pid=4705 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.804000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:03:59.819653 systemd[1]: Started cri-containerd-09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549.scope - libcontainer container 09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549. Jan 16 18:03:59.842371 systemd[1]: Started cri-containerd-f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da.scope - libcontainer container f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da. 
Jan 16 18:03:59.848000 audit: BPF prog-id=234 op=LOAD Jan 16 18:03:59.849000 audit: BPF prog-id=235 op=LOAD Jan 16 18:03:59.849000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=235 op=UNLOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=236 op=LOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.850000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=237 op=LOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=237 op=UNLOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=236 op=UNLOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:03:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.850000 audit: BPF prog-id=238 op=LOAD Jan 16 18:03:59.850000 audit[4684]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4672 pid=4684 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365323666616261613965333836383232303066613461363235393463 Jan 16 18:03:59.852000 audit: BPF prog-id=239 op=LOAD Jan 16 18:03:59.855000 audit: BPF prog-id=240 op=LOAD Jan 16 18:03:59.855000 audit[4711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.855000 audit: BPF prog-id=240 op=UNLOAD Jan 16 18:03:59.855000 audit[4711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.855000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.856000 audit: BPF prog-id=241 op=LOAD Jan 16 18:03:59.856000 audit[4711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.856000 audit: BPF prog-id=242 op=LOAD Jan 16 18:03:59.856000 audit[4711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.856000 audit: BPF prog-id=242 op=UNLOAD Jan 16 18:03:59.856000 audit[4711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.856000 audit: BPF prog-id=241 op=UNLOAD Jan 16 18:03:59.856000 audit[4711]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.856000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.857000 audit: BPF prog-id=243 op=LOAD Jan 16 18:03:59.857000 audit[4711]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4608 pid=4711 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.857000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3039613362653664613665343139363361333465643434333837333063 Jan 16 18:03:59.899000 audit: BPF prog-id=244 op=LOAD Jan 16 18:03:59.900982 containerd[1593]: time="2026-01-16T18:03:59.900848131Z" level=info msg="StartContainer for 
\"09a3be6da6e41963a34ed4438730cc0cf9d548435718263a796ebea192674549\" returns successfully" Jan 16 18:03:59.901000 audit: BPF prog-id=245 op=LOAD Jan 16 18:03:59.901000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.901000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.902000 audit: BPF prog-id=245 op=UNLOAD Jan 16 18:03:59.902000 audit[4723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.902000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.903000 audit: BPF prog-id=246 op=LOAD Jan 16 18:03:59.903000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.903000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.903000 audit: BPF prog-id=247 op=LOAD Jan 16 18:03:59.903000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.903000 audit: BPF prog-id=247 op=UNLOAD Jan 16 18:03:59.903000 audit[4723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.903000 audit: BPF prog-id=246 op=UNLOAD Jan 16 18:03:59.903000 audit[4723]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:03:59.903000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.904000 audit: BPF prog-id=248 op=LOAD Jan 16 18:03:59.904000 audit[4723]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=4704 pid=4723 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:59.904000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633376138333633396230323936613137613864316166323030373634 Jan 16 18:03:59.936053 containerd[1593]: time="2026-01-16T18:03:59.935995980Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-zmqhv,Uid:29a43599-84fe-43df-8ccd-cc9d89aeeb36,Namespace:calico-system,Attempt:0,} returns sandbox id \"3e26fabaa9e38682200fa4a62594c598644e4f90ed95b1d63c402ab16da466b4\"" Jan 16 18:03:59.942547 containerd[1593]: time="2026-01-16T18:03:59.942498580Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:03:59.989273 containerd[1593]: time="2026-01-16T18:03:59.989219114Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-x7tcq,Uid:c7bb61dc-6bdf-4bf7-9a33-c67b671e2820,Namespace:calico-system,Attempt:0,} returns sandbox id \"f37a83639b0296a17a8d1af2007643e13c45590fd98826242357d79f0e72d9da\"" Jan 16 18:04:00.098908 containerd[1593]: time="2026-01-16T18:04:00.098818116Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-795d7bd556-2wqnw,Uid:324c3e73-c2dc-4ae9-8875-3415b8305668,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:04:00.268701 systemd-networkd[1472]: calidd4a8e77e49: Link UP Jan 16 18:04:00.269706 systemd-networkd[1472]: calidd4a8e77e49: Gained carrier Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.165 [INFO][4793] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0 calico-apiserver-795d7bd556- calico-apiserver 324c3e73-c2dc-4ae9-8875-3415b8305668 856 0 2026-01-16 18:03:27 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:795d7bd556 projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580-0-0-p-f44e0c3b96 calico-apiserver-795d7bd556-2wqnw eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] calidd4a8e77e49 [] [] }} ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.165 [INFO][4793] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.200 [INFO][4805] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" 
HandleID="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.200 [INFO][4805] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" HandleID="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c0fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580-0-0-p-f44e0c3b96", "pod":"calico-apiserver-795d7bd556-2wqnw", "timestamp":"2026-01-16 18:04:00.200513642 +0000 UTC"}, Hostname:"ci-4580-0-0-p-f44e0c3b96", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.200 [INFO][4805] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.200 [INFO][4805] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.200 [INFO][4805] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-f44e0c3b96' Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.218 [INFO][4805] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.227 [INFO][4805] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.234 [INFO][4805] ipam/ipam.go 511: Trying affinity for 192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.237 [INFO][4805] ipam/ipam.go 158: Attempting to load block cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.240 [INFO][4805] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.28.0/26 host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.240 [INFO][4805] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.28.0/26 handle="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.242 [INFO][4805] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1 Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.248 [INFO][4805] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.28.0/26 handle="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.259 [INFO][4805] ipam/ipam.go 1262: Successfully 
claimed IPs: [192.168.28.8/26] block=192.168.28.0/26 handle="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.259 [INFO][4805] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.28.8/26] handle="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" host="ci-4580-0-0-p-f44e0c3b96" Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.259 [INFO][4805] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:04:00.286090 containerd[1593]: 2026-01-16 18:04:00.260 [INFO][4805] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.28.8/26] IPv6=[] ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" HandleID="k8s-pod-network.49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Workload="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.264 [INFO][4793] cni-plugin/k8s.go 418: Populated endpoint ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0", GenerateName:"calico-apiserver-795d7bd556-", Namespace:"calico-apiserver", SelfLink:"", UID:"324c3e73-c2dc-4ae9-8875-3415b8305668", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", 
"k8s-app":"calico-apiserver", "pod-template-hash":"795d7bd556", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"", Pod:"calico-apiserver-795d7bd556-2wqnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd4a8e77e49", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.264 [INFO][4793] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.28.8/32] ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.264 [INFO][4793] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calidd4a8e77e49 ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.267 [INFO][4793] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" 
WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.267 [INFO][4793] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0", GenerateName:"calico-apiserver-795d7bd556-", Namespace:"calico-apiserver", SelfLink:"", UID:"324c3e73-c2dc-4ae9-8875-3415b8305668", ResourceVersion:"856", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 3, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"795d7bd556", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-f44e0c3b96", ContainerID:"49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1", Pod:"calico-apiserver-795d7bd556-2wqnw", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.28.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"calidd4a8e77e49", MAC:"86:87:6a:53:96:a1", 
Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:04:00.286998 containerd[1593]: 2026-01-16 18:04:00.282 [INFO][4793] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" Namespace="calico-apiserver" Pod="calico-apiserver-795d7bd556-2wqnw" WorkloadEndpoint="ci--4580--0--0--p--f44e0c3b96-k8s-calico--apiserver--795d7bd556--2wqnw-eth0" Jan 16 18:04:00.317000 audit[4821]: NETFILTER_CFG table=filter:139 family=2 entries=53 op=nft_register_chain pid=4821 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:04:00.317000 audit[4821]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=26608 a0=3 a1=ffffde163500 a2=0 a3=ffff99e87fa8 items=0 ppid=4181 pid=4821 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.317000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:04:00.321552 containerd[1593]: time="2026-01-16T18:04:00.321503607Z" level=info msg="connecting to shim 49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1" address="unix:///run/containerd/s/d2c78c0f74fa3d1cb3c4012bca761de90e60ee5310a004adc21e59d2ce3c353b" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:04:00.356330 systemd[1]: Started cri-containerd-49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1.scope - libcontainer container 49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1. 
Jan 16 18:04:00.372000 audit: BPF prog-id=249 op=LOAD Jan 16 18:04:00.373000 audit: BPF prog-id=250 op=LOAD Jan 16 18:04:00.373000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.373000 audit: BPF prog-id=250 op=UNLOAD Jan 16 18:04:00.373000 audit[4842]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.373000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.374000 audit: BPF prog-id=251 op=LOAD Jan 16 18:04:00.374000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.374000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.374000 audit: BPF prog-id=252 op=LOAD Jan 16 18:04:00.374000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.374000 audit: BPF prog-id=252 op=UNLOAD Jan 16 18:04:00.374000 audit[4842]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.374000 audit: BPF prog-id=251 op=UNLOAD Jan 16 18:04:00.374000 audit[4842]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:04:00.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.374000 audit: BPF prog-id=253 op=LOAD Jan 16 18:04:00.374000 audit[4842]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4829 pid=4842 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.374000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3439613035613532393562333631306366393238653466663930393330 Jan 16 18:04:00.412114 containerd[1593]: time="2026-01-16T18:04:00.411997811Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-795d7bd556-2wqnw,Uid:324c3e73-c2dc-4ae9-8875-3415b8305668,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"49a05a5295b3610cf928e4ff9093089a3b8a8d0c01605c0f8ba76fc8d92d42f1\"" Jan 16 18:04:00.419967 kubelet[2847]: I0116 18:04:00.419240 2847 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-pr96z" podStartSLOduration=46.419220299 podStartE2EDuration="46.419220299s" podCreationTimestamp="2026-01-16 18:03:14 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:04:00.418873912 +0000 UTC m=+51.450794203" watchObservedRunningTime="2026-01-16 18:04:00.419220299 +0000 UTC m=+51.451140630" Jan 16 18:04:00.463000 audit[4870]: NETFILTER_CFG table=filter:140 family=2 entries=14 op=nft_register_rule 
pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:00.463000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff4d91ef0 a2=0 a3=1 items=0 ppid=2953 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.463000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:00.469000 audit[4870]: NETFILTER_CFG table=nat:141 family=2 entries=44 op=nft_register_rule pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:00.469000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=fffff4d91ef0 a2=0 a3=1 items=0 ppid=2953 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.469000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:00.486000 audit[4872]: NETFILTER_CFG table=filter:142 family=2 entries=14 op=nft_register_rule pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:00.486000 audit[4872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff2d7bd70 a2=0 a3=1 items=0 ppid=2953 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.486000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:00.502000 audit[4872]: NETFILTER_CFG table=nat:143 
family=2 entries=56 op=nft_register_chain pid=4872 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:00.502000 audit[4872]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=fffff2d7bd70 a2=0 a3=1 items=0 ppid=2953 pid=4872 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:00.502000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:00.520213 containerd[1593]: time="2026-01-16T18:04:00.520031596Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:00.523252 containerd[1593]: time="2026-01-16T18:04:00.521911304Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:04:00.523252 containerd[1593]: time="2026-01-16T18:04:00.521969468Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:00.525990 kubelet[2847]: E0116 18:04:00.525526 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:00.525990 kubelet[2847]: E0116 18:04:00.525575 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:00.525990 kubelet[2847]: E0116 18:04:00.525795 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kdrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:00.526697 containerd[1593]: time="2026-01-16T18:04:00.526432780Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:04:00.527296 kubelet[2847]: E0116 18:04:00.527202 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:00.790475 systemd-networkd[1472]: calie27caa42560: Gained IPv6LL Jan 16 18:04:00.888371 containerd[1593]: time="2026-01-16T18:04:00.888317789Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 
16 18:04:00.890512 containerd[1593]: time="2026-01-16T18:04:00.890400593Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:04:00.890512 containerd[1593]: time="2026-01-16T18:04:00.890453077Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:00.890824 kubelet[2847]: E0116 18:04:00.890751 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:00.890878 kubelet[2847]: E0116 18:04:00.890813 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:00.891679 kubelet[2847]: E0116 18:04:00.891592 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 18:04:00.893296 containerd[1593]: time="2026-01-16T18:04:00.893251377Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:01.048319 systemd-networkd[1472]: calic50072d36ac: Gained IPv6LL Jan 16 18:04:01.049082 systemd-networkd[1472]: cali8015456143a: Gained IPv6LL Jan 16 18:04:01.236559 containerd[1593]: time="2026-01-16T18:04:01.236405406Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:01.238041 containerd[1593]: time="2026-01-16T18:04:01.237986129Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:01.238344 containerd[1593]: time="2026-01-16T18:04:01.238097977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:01.239211 kubelet[2847]: E0116 18:04:01.239100 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:01.239211 kubelet[2847]: E0116 18:04:01.239160 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:01.239753 kubelet[2847]: E0116 18:04:01.239455 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt8wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:01.239912 containerd[1593]: time="2026-01-16T18:04:01.239632296Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:04:01.241726 kubelet[2847]: E0116 18:04:01.241304 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:01.406609 kubelet[2847]: E0116 18:04:01.406282 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:01.409268 kubelet[2847]: E0116 18:04:01.409225 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:01.527000 audit[4881]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:01.529788 kernel: kauditd_printk_skb: 437 callbacks suppressed Jan 16 18:04:01.529902 kernel: audit: type=1325 audit(1768586641.527:733): table=filter:144 family=2 entries=14 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:01.527000 audit[4881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe2c20250 a2=0 a3=1 items=0 ppid=2953 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:01.534331 kernel: audit: type=1300 audit(1768586641.527:733): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe2c20250 a2=0 a3=1 items=0 ppid=2953 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:01.527000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:01.535994 kernel: audit: type=1327 audit(1768586641.527:733): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:01.534000 audit[4881]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:01.534000 audit[4881]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe2c20250 a2=0 a3=1 items=0 ppid=2953 
pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:01.539929 kernel: audit: type=1325 audit(1768586641.534:734): table=nat:145 family=2 entries=20 op=nft_register_rule pid=4881 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:01.540040 kernel: audit: type=1300 audit(1768586641.534:734): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe2c20250 a2=0 a3=1 items=0 ppid=2953 pid=4881 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:01.534000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:01.541665 kernel: audit: type=1327 audit(1768586641.534:734): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:01.584892 containerd[1593]: time="2026-01-16T18:04:01.584799600Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:01.588095 containerd[1593]: time="2026-01-16T18:04:01.588018490Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:04:01.588219 containerd[1593]: time="2026-01-16T18:04:01.588052652Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:01.588462 kubelet[2847]: E0116 18:04:01.588383 2847 log.go:32] "PullImage from image service failed" err="rpc 
error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:01.588880 kubelet[2847]: E0116 18:04:01.588488 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:01.588880 kubelet[2847]: E0116 18:04:01.588635 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:
nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:01.590825 kubelet[2847]: E0116 18:04:01.589968 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:02.005372 systemd-networkd[1472]: calidd4a8e77e49: Gained IPv6LL Jan 16 18:04:02.412228 kubelet[2847]: E0116 18:04:02.412074 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for 
\"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:02.412228 kubelet[2847]: E0116 18:04:02.412185 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:05.107568 containerd[1593]: time="2026-01-16T18:04:05.106048782Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:04:05.472310 containerd[1593]: time="2026-01-16T18:04:05.472172862Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:05.473724 containerd[1593]: time="2026-01-16T18:04:05.473590967Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:04:05.474104 containerd[1593]: time="2026-01-16T18:04:05.473676173Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:05.474301 kubelet[2847]: E0116 18:04:05.474009 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:05.474301 kubelet[2847]: E0116 18:04:05.474066 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:05.477847 kubelet[2847]: E0116 18:04:05.477089 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9acb3dbfadba421ba67d8f0d97111c75,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:05.481956 containerd[1593]: time="2026-01-16T18:04:05.481885138Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:04:05.828938 containerd[1593]: 
time="2026-01-16T18:04:05.828711315Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:05.831257 containerd[1593]: time="2026-01-16T18:04:05.831130413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:04:05.831257 containerd[1593]: time="2026-01-16T18:04:05.831162295Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:05.832028 kubelet[2847]: E0116 18:04:05.831512 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:05.832028 kubelet[2847]: E0116 18:04:05.831573 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:05.832028 kubelet[2847]: E0116 18:04:05.831724 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:05.833614 kubelet[2847]: E0116 18:04:05.833525 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:04:11.099577 containerd[1593]: time="2026-01-16T18:04:11.099200012Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:04:11.456909 containerd[1593]: time="2026-01-16T18:04:11.456775400Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:11.459911 containerd[1593]: time="2026-01-16T18:04:11.459814651Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:04:11.460091 containerd[1593]: time="2026-01-16T18:04:11.459875375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:11.460281 kubelet[2847]: E0116 18:04:11.460227 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:04:11.460596 kubelet[2847]: E0116 18:04:11.460293 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:04:11.461239 kubelet[2847]: E0116 18:04:11.461062 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft29j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:11.462704 kubelet[2847]: E0116 18:04:11.462640 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" 
podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:04:12.101476 containerd[1593]: time="2026-01-16T18:04:12.100259937Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:12.456290 containerd[1593]: time="2026-01-16T18:04:12.456014641Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:12.458434 containerd[1593]: time="2026-01-16T18:04:12.457644474Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:12.458434 containerd[1593]: time="2026-01-16T18:04:12.457796124Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:12.459274 kubelet[2847]: E0116 18:04:12.458096 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:12.459274 kubelet[2847]: E0116 18:04:12.458353 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:12.459274 kubelet[2847]: E0116 18:04:12.458780 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x7th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:12.461075 kubelet[2847]: E0116 18:04:12.460466 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:04:14.100998 containerd[1593]: time="2026-01-16T18:04:14.100909022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:04:14.452234 containerd[1593]: time="2026-01-16T18:04:14.452173539Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:14.453883 containerd[1593]: time="2026-01-16T18:04:14.453836579Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:04:14.454005 containerd[1593]: time="2026-01-16T18:04:14.453929255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:14.454155 kubelet[2847]: E0116 18:04:14.454117 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:14.454546 kubelet[2847]: E0116 18:04:14.454169 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:14.454546 kubelet[2847]: E0116 18:04:14.454372 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kdrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:14.455983 kubelet[2847]: E0116 18:04:14.455896 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:15.101005 containerd[1593]: time="2026-01-16T18:04:15.100518991Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:15.436111 containerd[1593]: time="2026-01-16T18:04:15.435742089Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:15.442294 containerd[1593]: time="2026-01-16T18:04:15.442040366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:15.442294 containerd[1593]: time="2026-01-16T18:04:15.442165800Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:15.442799 kubelet[2847]: E0116 18:04:15.442695 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:15.442799 kubelet[2847]: E0116 18:04:15.442761 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:15.443091 kubelet[2847]: E0116 18:04:15.442923 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt8wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:15.444431 kubelet[2847]: E0116 18:04:15.444355 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:17.106223 containerd[1593]: time="2026-01-16T18:04:17.106113643Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:04:17.473239 containerd[1593]: time="2026-01-16T18:04:17.473002550Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:17.474696 containerd[1593]: time="2026-01-16T18:04:17.474564729Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:04:17.474696 containerd[1593]: time="2026-01-16T18:04:17.474630046Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:17.475287 kubelet[2847]: E0116 18:04:17.474817 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:17.475287 kubelet[2847]: E0116 18:04:17.474873 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:17.475287 kubelet[2847]: E0116 18:04:17.475045 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:17.478926 containerd[1593]: time="2026-01-16T18:04:17.478685007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:04:17.807112 containerd[1593]: time="2026-01-16T18:04:17.806905389Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:17.809232 containerd[1593]: time="2026-01-16T18:04:17.809022226Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:04:17.809232 containerd[1593]: time="2026-01-16T18:04:17.809044905Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:17.809662 kubelet[2847]: E0116 18:04:17.809616 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:17.809984 kubelet[2847]: E0116 18:04:17.809760 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:17.809984 kubelet[2847]: E0116 18:04:17.809914 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:17.811732 kubelet[2847]: E0116 18:04:17.811661 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:20.105292 kubelet[2847]: E0116 18:04:20.105195 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:04:23.102336 kubelet[2847]: E0116 18:04:23.102221 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:04:23.110106 kubelet[2847]: E0116 18:04:23.110031 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:04:25.101193 kubelet[2847]: E0116 18:04:25.100983 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:27.440885 systemd[1]: Started 
sshd@9-188.245.199.112:22-210.79.142.221:37594.service - OpenSSH per-connection server daemon (210.79.142.221:37594). Jan 16 18:04:27.440000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.199.112:22-210.79.142.221:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.445981 kernel: audit: type=1130 audit(1768586667.440:735): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.199.112:22-210.79.142.221:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:28.103394 kubelet[2847]: E0116 18:04:28.103292 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:28.499458 sshd[4933]: Invalid user myuser from 210.79.142.221 port 37594 Jan 16 18:04:28.728713 sshd[4933]: Received disconnect from 210.79.142.221 port 37594:11: Bye Bye [preauth] Jan 16 18:04:28.728713 sshd[4933]: Disconnected from invalid user myuser 210.79.142.221 port 37594 [preauth] Jan 16 18:04:28.729000 audit[4933]: USER_ERR pid=4933 
uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=210.79.142.221 addr=210.79.142.221 terminal=ssh res=failed' Jan 16 18:04:28.732983 kernel: audit: type=1109 audit(1768586668.729:736): pid=4933 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=210.79.142.221 addr=210.79.142.221 terminal=ssh res=failed' Jan 16 18:04:28.734344 systemd[1]: sshd@9-188.245.199.112:22-210.79.142.221:37594.service: Deactivated successfully. Jan 16 18:04:28.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.199.112:22-210.79.142.221:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:28.743021 kernel: audit: type=1131 audit(1768586668.735:737): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-188.245.199.112:22-210.79.142.221:37594 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:29.106587 kubelet[2847]: E0116 18:04:29.106532 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:31.099097 containerd[1593]: time="2026-01-16T18:04:31.098889007Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:04:31.445249 containerd[1593]: time="2026-01-16T18:04:31.444885271Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:31.446552 containerd[1593]: time="2026-01-16T18:04:31.446490818Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:04:31.446672 containerd[1593]: time="2026-01-16T18:04:31.446596578Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:31.447160 kubelet[2847]: E0116 18:04:31.447083 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:31.447470 kubelet[2847]: E0116 18:04:31.447195 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:31.447470 kubelet[2847]: E0116 18:04:31.447435 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9acb3dbfadba421ba67d8f0d97111c75,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:31.450276 containerd[1593]: time="2026-01-16T18:04:31.450239270Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:04:31.799526 containerd[1593]: time="2026-01-16T18:04:31.799471629Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:31.801219 containerd[1593]: time="2026-01-16T18:04:31.801115576Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:04:31.801219 containerd[1593]: time="2026-01-16T18:04:31.801181856Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:31.801731 kubelet[2847]: E0116 18:04:31.801496 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:31.801731 kubelet[2847]: E0116 18:04:31.801545 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:31.801731 kubelet[2847]: E0116 18:04:31.801673 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:31.803469 kubelet[2847]: E0116 18:04:31.803402 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:04:37.102792 containerd[1593]: time="2026-01-16T18:04:37.102499381Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:04:37.434529 containerd[1593]: time="2026-01-16T18:04:37.433901873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:37.436424 containerd[1593]: time="2026-01-16T18:04:37.436291518Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:04:37.436424 containerd[1593]: time="2026-01-16T18:04:37.436356718Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:37.437178 kubelet[2847]: E0116 18:04:37.437122 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:37.437539 kubelet[2847]: E0116 18:04:37.437193 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:04:37.437539 kubelet[2847]: E0116 18:04:37.437481 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kdrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPa
thExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:37.438164 containerd[1593]: time="2026-01-16T18:04:37.437905601Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:04:37.439117 kubelet[2847]: E0116 18:04:37.439071 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to 
resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:37.821095 containerd[1593]: time="2026-01-16T18:04:37.821015360Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:37.822586 containerd[1593]: time="2026-01-16T18:04:37.822516523Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:04:37.823102 containerd[1593]: time="2026-01-16T18:04:37.822544523Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:37.823258 kubelet[2847]: E0116 18:04:37.822915 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:04:37.823258 kubelet[2847]: E0116 18:04:37.823011 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:04:37.823483 kubelet[2847]: E0116 18:04:37.823211 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft29j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:37.825166 kubelet[2847]: E0116 18:04:37.825096 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:04:38.100731 containerd[1593]: time="2026-01-16T18:04:38.100461728Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:38.453561 containerd[1593]: time="2026-01-16T18:04:38.453503337Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:04:38.455030 containerd[1593]: time="2026-01-16T18:04:38.454930462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:38.455261 containerd[1593]: time="2026-01-16T18:04:38.455128343Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:38.455483 kubelet[2847]: E0116 18:04:38.455429 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:38.455483 kubelet[2847]: E0116 18:04:38.455478 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:38.455912 kubelet[2847]: E0116 18:04:38.455618 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x7th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:38.456929 kubelet[2847]: E0116 18:04:38.456889 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:04:43.101975 containerd[1593]: time="2026-01-16T18:04:43.100751136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:04:43.451036 containerd[1593]: time="2026-01-16T18:04:43.450360246Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:43.453005 containerd[1593]: time="2026-01-16T18:04:43.452937272Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:04:43.453297 containerd[1593]: time="2026-01-16T18:04:43.453171635Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:43.454440 kubelet[2847]: E0116 18:04:43.453640 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:43.454440 kubelet[2847]: E0116 18:04:43.453691 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:43.454440 kubelet[2847]: E0116 18:04:43.453820 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:43.457752 containerd[1593]: time="2026-01-16T18:04:43.457706641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:04:43.796040 containerd[1593]: time="2026-01-16T18:04:43.795908155Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:43.798003 containerd[1593]: time="2026-01-16T18:04:43.797846615Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:04:43.798003 containerd[1593]: time="2026-01-16T18:04:43.797884255Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:43.798442 kubelet[2847]: E0116 18:04:43.798367 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:43.798512 kubelet[2847]: E0116 18:04:43.798449 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:43.798663 kubelet[2847]: E0116 18:04:43.798600 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:43.800204 kubelet[2847]: E0116 18:04:43.800159 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:44.101192 kubelet[2847]: E0116 18:04:44.100975 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve 
image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:04:44.105634 containerd[1593]: time="2026-01-16T18:04:44.105513624Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:44.445313 containerd[1593]: time="2026-01-16T18:04:44.445170482Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:44.447094 containerd[1593]: time="2026-01-16T18:04:44.446765740Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:44.447433 containerd[1593]: time="2026-01-16T18:04:44.446793020Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:44.447674 kubelet[2847]: E0116 18:04:44.447634 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:44.447743 kubelet[2847]: E0116 18:04:44.447687 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:44.447897 kubelet[2847]: E0116 18:04:44.447821 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 
--tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt8wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:44.449031 kubelet[2847]: E0116 18:04:44.448972 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:49.101194 kubelet[2847]: E0116 18:04:49.100195 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:04:50.099246 kubelet[2847]: E0116 18:04:50.099105 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" 
podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:04:52.100102 kubelet[2847]: E0116 18:04:52.100044 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:04:56.100095 kubelet[2847]: E0116 18:04:56.099985 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:04:58.102048 kubelet[2847]: E0116 18:04:58.101993 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:04:58.104666 kubelet[2847]: E0116 18:04:58.104615 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:04:59.776301 systemd[1]: Started sshd@10-188.245.199.112:22-80.94.95.116:56746.service - OpenSSH per-connection server daemon (80.94.95.116:56746). Jan 16 18:04:59.775000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.199.112:22-80.94.95.116:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:59.779970 kernel: audit: type=1130 audit(1768586699.775:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.199.112:22-80.94.95.116:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:05:01.103973 kubelet[2847]: E0116 18:05:01.102738 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:05:01.276701 sshd[4979]: Invalid user admin from 80.94.95.116 port 56746 Jan 16 18:05:01.364107 sshd[4979]: Connection closed by invalid user admin 80.94.95.116 port 56746 [preauth] Jan 16 18:05:01.363000 audit[4979]: USER_ERR pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=80.94.95.116 addr=80.94.95.116 terminal=ssh res=failed' Jan 16 18:05:01.367490 systemd[1]: sshd@10-188.245.199.112:22-80.94.95.116:56746.service: Deactivated successfully. Jan 16 18:05:01.368000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.199.112:22-80.94.95.116:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:01.371728 kernel: audit: type=1109 audit(1768586701.363:739): pid=4979 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" 
exe="/usr/lib64/misc/sshd-session" hostname=80.94.95.116 addr=80.94.95.116 terminal=ssh res=failed' Jan 16 18:05:01.371848 kernel: audit: type=1131 audit(1768586701.368:740): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-188.245.199.112:22-80.94.95.116:56746 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:05.099830 kubelet[2847]: E0116 18:05:05.099393 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:05:06.100486 kubelet[2847]: E0116 18:05:06.100155 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:05:09.103151 kubelet[2847]: E0116 18:05:09.103093 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not 
found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:05:10.099980 kubelet[2847]: E0116 18:05:10.099068 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:05:11.102281 kubelet[2847]: E0116 18:05:11.102223 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" 
podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:05:14.098700 kubelet[2847]: E0116 18:05:14.098616 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:05:16.099502 kubelet[2847]: E0116 18:05:16.099332 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:05:17.101279 kubelet[2847]: E0116 18:05:17.100852 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:05:22.103568 kubelet[2847]: E0116 18:05:22.103281 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:05:24.099206 kubelet[2847]: E0116 18:05:24.098475 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:05:25.102081 containerd[1593]: time="2026-01-16T18:05:25.101158348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:05:25.463486 containerd[1593]: time="2026-01-16T18:05:25.463282679Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:25.464756 containerd[1593]: time="2026-01-16T18:05:25.464708814Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:05:25.465053 containerd[1593]: time="2026-01-16T18:05:25.464757015Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:25.465670 kubelet[2847]: E0116 18:05:25.465636 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:25.468119 kubelet[2847]: E0116 18:05:25.466025 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:05:25.468265 kubelet[2847]: E0116 18:05:25.468225 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9acb3dbfadba421ba67d8f0d97111c75,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:25.472274 containerd[1593]: time="2026-01-16T18:05:25.472227223Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:05:25.819055 containerd[1593]: 
time="2026-01-16T18:05:25.817910521Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:25.820544 containerd[1593]: time="2026-01-16T18:05:25.820427138Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:05:25.820671 containerd[1593]: time="2026-01-16T18:05:25.820508101Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:25.820749 kubelet[2847]: E0116 18:05:25.820698 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:25.821396 kubelet[2847]: E0116 18:05:25.820754 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:05:25.821396 kubelet[2847]: E0116 18:05:25.820906 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:25.822745 kubelet[2847]: E0116 18:05:25.822704 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:05:26.100062 containerd[1593]: time="2026-01-16T18:05:26.098846359Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:05:26.450407 containerd[1593]: time="2026-01-16T18:05:26.450033020Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:26.452143 containerd[1593]: time="2026-01-16T18:05:26.451938774Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:05:26.452402 containerd[1593]: time="2026-01-16T18:05:26.451999337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:26.452643 kubelet[2847]: E0116 18:05:26.452591 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:26.452707 kubelet[2847]: E0116 18:05:26.452648 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:26.452928 kubelet[2847]: E0116 18:05:26.452773 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft29j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHan
dler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:26.454312 kubelet[2847]: E0116 18:05:26.454257 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" 
podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:05:29.110206 containerd[1593]: time="2026-01-16T18:05:29.109896353Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:05:29.459803 containerd[1593]: time="2026-01-16T18:05:29.459528825Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:29.463479 containerd[1593]: time="2026-01-16T18:05:29.463418980Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:05:29.463965 containerd[1593]: time="2026-01-16T18:05:29.463581906Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:29.464144 kubelet[2847]: E0116 18:05:29.464088 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:29.464900 kubelet[2847]: E0116 18:05:29.464204 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:29.464900 kubelet[2847]: E0116 18:05:29.464583 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kdrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:29.466234 kubelet[2847]: E0116 18:05:29.466187 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:05:30.099636 containerd[1593]: time="2026-01-16T18:05:30.099387681Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:30.466204 containerd[1593]: time="2026-01-16T18:05:30.466146095Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:30.467819 containerd[1593]: 
time="2026-01-16T18:05:30.467772360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:30.467931 containerd[1593]: time="2026-01-16T18:05:30.467868524Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:30.468233 kubelet[2847]: E0116 18:05:30.468084 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:30.468233 kubelet[2847]: E0116 18:05:30.468144 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:30.469042 kubelet[2847]: E0116 18:05:30.468636 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x7th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:30.470229 kubelet[2847]: E0116 18:05:30.470169 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:05:34.100465 containerd[1593]: time="2026-01-16T18:05:34.100161148Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:05:34.441987 containerd[1593]: time="2026-01-16T18:05:34.441350492Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:34.444422 containerd[1593]: time="2026-01-16T18:05:34.444103045Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:05:34.444422 containerd[1593]: time="2026-01-16T18:05:34.444345375Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:34.445005 kubelet[2847]: E0116 18:05:34.444805 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:34.445005 kubelet[2847]: E0116 18:05:34.444880 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:05:34.445791 kubelet[2847]: E0116 18:05:34.445506 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:34.449140 containerd[1593]: time="2026-01-16T18:05:34.448770396Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:05:34.799200 containerd[1593]: time="2026-01-16T18:05:34.799130157Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:34.801226 containerd[1593]: time="2026-01-16T18:05:34.801080076Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:05:34.801550 containerd[1593]: time="2026-01-16T18:05:34.801174240Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:34.801912 kubelet[2847]: E0116 18:05:34.801831 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:34.801912 kubelet[2847]: E0116 18:05:34.801885 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:05:34.802553 kubelet[2847]: E0116 18:05:34.802494 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:34.804086 kubelet[2847]: E0116 18:05:34.804025 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:05:36.099956 containerd[1593]: time="2026-01-16T18:05:36.099860515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:36.455444 containerd[1593]: time="2026-01-16T18:05:36.455378213Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:36.457125 containerd[1593]: time="2026-01-16T18:05:36.457045002Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:36.457284 containerd[1593]: time="2026-01-16T18:05:36.457191728Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:36.457644 kubelet[2847]: E0116 18:05:36.457565 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:36.458484 kubelet[2847]: E0116 18:05:36.458155 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:36.458700 kubelet[2847]: E0116 18:05:36.458652 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt8wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:36.460082 kubelet[2847]: E0116 18:05:36.460036 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:05:39.099438 kubelet[2847]: E0116 18:05:39.099343 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:05:39.102122 kubelet[2847]: E0116 18:05:39.102053 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:05:39.806000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.199.112:22-68.220.241.50:44922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:39.807097 systemd[1]: Started sshd@11-188.245.199.112:22-68.220.241.50:44922.service - OpenSSH per-connection server daemon (68.220.241.50:44922). Jan 16 18:05:39.812016 kernel: audit: type=1130 audit(1768586739.806:741): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.199.112:22-68.220.241.50:44922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:40.391000 audit[5049]: USER_ACCT pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.394121 sshd[5049]: Accepted publickey for core from 68.220.241.50 port 44922 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:05:40.396527 sshd-session[5049]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:40.394000 audit[5049]: CRED_ACQ pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.398857 kernel: audit: type=1101 audit(1768586740.391:742): pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.398939 kernel: audit: type=1103 audit(1768586740.394:743): pid=5049 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.401143 kernel: audit: type=1006 audit(1768586740.394:744): pid=5049 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=9 res=1 Jan 16 18:05:40.394000 audit[5049]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf6634a0 a2=3 a3=0 items=0 ppid=1 pid=5049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.405038 kernel: audit: type=1300 audit(1768586740.394:744): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffdf6634a0 a2=3 a3=0 items=0 ppid=1 pid=5049 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:40.394000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:40.408525 kernel: audit: type=1327 audit(1768586740.394:744): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:40.411012 systemd-logind[1564]: New session 9 of user core. Jan 16 18:05:40.417378 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 16 18:05:40.424000 audit[5049]: USER_START pid=5049 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.428000 audit[5053]: CRED_ACQ pid=5053 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.431290 kernel: audit: type=1105 audit(1768586740.424:745): pid=5049 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.431367 kernel: audit: type=1103 audit(1768586740.428:746): pid=5053 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.828128 sshd[5053]: Connection closed by 68.220.241.50 port 44922 Jan 16 18:05:40.828601 sshd-session[5049]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:40.830000 audit[5049]: USER_END pid=5049 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.830000 audit[5049]: CRED_DISP pid=5049 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.839105 kernel: audit: type=1106 audit(1768586740.830:747): pid=5049 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.839180 kernel: audit: type=1104 audit(1768586740.830:748): pid=5049 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:40.840017 systemd-logind[1564]: Session 9 logged out. Waiting for processes to exit. Jan 16 18:05:40.841177 systemd[1]: sshd@11-188.245.199.112:22-68.220.241.50:44922.service: Deactivated successfully. Jan 16 18:05:40.841000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-188.245.199.112:22-68.220.241.50:44922 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:40.845869 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 18:05:40.848637 systemd-logind[1564]: Removed session 9. 
Jan 16 18:05:41.100651 kubelet[2847]: E0116 18:05:41.100211 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:05:44.099544 kubelet[2847]: E0116 18:05:44.099441 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:05:45.951339 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:05:45.951460 kernel: audit: type=1130 audit(1768586745.944:750): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-188.245.199.112:22-68.220.241.50:47612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:45.944000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-188.245.199.112:22-68.220.241.50:47612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:45.945318 systemd[1]: Started sshd@12-188.245.199.112:22-68.220.241.50:47612.service - OpenSSH per-connection server daemon (68.220.241.50:47612). 
Jan 16 18:05:46.525000 audit[5069]: USER_ACCT pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.528143 sshd[5069]: Accepted publickey for core from 68.220.241.50 port 47612 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:05:46.535332 kernel: audit: type=1101 audit(1768586746.525:751): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.535441 kernel: audit: type=1103 audit(1768586746.530:752): pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.530000 audit[5069]: CRED_ACQ pid=5069 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.532427 sshd-session[5069]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:46.537439 kernel: audit: type=1006 audit(1768586746.530:753): pid=5069 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 16 18:05:46.530000 audit[5069]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9533060 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:46.540321 kernel: audit: type=1300 audit(1768586746.530:753): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd9533060 a2=3 a3=0 items=0 ppid=1 pid=5069 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:46.541613 kernel: audit: type=1327 audit(1768586746.530:753): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:46.530000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:46.548873 systemd-logind[1564]: New session 10 of user core. Jan 16 18:05:46.558684 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 16 18:05:46.564000 audit[5069]: USER_START pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.570135 kernel: audit: type=1105 audit(1768586746.564:754): pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.572000 audit[5073]: CRED_ACQ pid=5073 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.576011 kernel: audit: type=1103 audit(1768586746.572:755): pid=5073 uid=0 auid=500 
ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.936155 sshd[5073]: Connection closed by 68.220.241.50 port 47612 Jan 16 18:05:46.936863 sshd-session[5069]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:46.938000 audit[5069]: USER_END pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.938000 audit[5069]: CRED_DISP pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.948503 kernel: audit: type=1106 audit(1768586746.938:756): pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.948622 kernel: audit: type=1104 audit(1768586746.938:757): pid=5069 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:46.950355 systemd[1]: sshd@12-188.245.199.112:22-68.220.241.50:47612.service: Deactivated successfully. 
Jan 16 18:05:46.949000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-188.245.199.112:22-68.220.241.50:47612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:46.952851 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 18:05:46.955553 systemd-logind[1564]: Session 10 logged out. Waiting for processes to exit. Jan 16 18:05:46.958506 systemd-logind[1564]: Removed session 10. Jan 16 18:05:49.103196 kubelet[2847]: E0116 18:05:49.103144 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:05:50.100443 kubelet[2847]: E0116 18:05:50.100273 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" 
pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:05:50.100443 kubelet[2847]: E0116 18:05:50.100382 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:05:52.042756 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:05:52.042874 kernel: audit: type=1130 audit(1768586752.038:759): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-188.245.199.112:22-68.220.241.50:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:52.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-188.245.199.112:22-68.220.241.50:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:52.039302 systemd[1]: Started sshd@13-188.245.199.112:22-68.220.241.50:47614.service - OpenSSH per-connection server daemon (68.220.241.50:47614). 
Jan 16 18:05:52.104551 kubelet[2847]: E0116 18:05:52.104506 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:05:52.576473 sshd[5086]: Accepted publickey for core from 68.220.241.50 port 47614 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:05:52.575000 audit[5086]: USER_ACCT pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.578000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.581908 kernel: audit: type=1101 audit(1768586752.575:760): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.582002 kernel: audit: type=1103 audit(1768586752.578:761): pid=5086 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.580336 sshd-session[5086]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:52.585468 kernel: audit: type=1006 audit(1768586752.578:762): pid=5086 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 18:05:52.578000 audit[5086]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee839c50 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:52.578000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:52.589651 kernel: audit: type=1300 audit(1768586752.578:762): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee839c50 a2=3 a3=0 items=0 ppid=1 pid=5086 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:52.589764 kernel: audit: type=1327 audit(1768586752.578:762): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:52.591810 systemd-logind[1564]: New session 11 of user core. Jan 16 18:05:52.598202 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 18:05:52.603000 audit[5086]: USER_START pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.606996 kernel: audit: type=1105 audit(1768586752.603:763): pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.608000 audit[5114]: CRED_ACQ pid=5114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.613971 kernel: audit: type=1103 audit(1768586752.608:764): pid=5114 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.980256 sshd[5114]: Connection closed by 68.220.241.50 port 47614 Jan 16 18:05:52.980848 sshd-session[5086]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:52.982000 audit[5086]: USER_END pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.986000 audit[5086]: CRED_DISP pid=5086 uid=0 auid=500 
ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.992249 kernel: audit: type=1106 audit(1768586752.982:765): pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.992302 kernel: audit: type=1104 audit(1768586752.986:766): pid=5086 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:52.990982 systemd[1]: sshd@13-188.245.199.112:22-68.220.241.50:47614.service: Deactivated successfully. Jan 16 18:05:52.992000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-188.245.199.112:22-68.220.241.50:47614 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:52.994995 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 18:05:52.997519 systemd-logind[1564]: Session 11 logged out. Waiting for processes to exit. Jan 16 18:05:53.000440 systemd-logind[1564]: Removed session 11. Jan 16 18:05:53.097629 systemd[1]: Started sshd@14-188.245.199.112:22-68.220.241.50:42516.service - OpenSSH per-connection server daemon (68.220.241.50:42516). Jan 16 18:05:53.096000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-188.245.199.112:22-68.220.241.50:42516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:05:53.660000 audit[5127]: USER_ACCT pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:53.662259 sshd[5127]: Accepted publickey for core from 68.220.241.50 port 42516 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:05:53.662000 audit[5127]: CRED_ACQ pid=5127 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:53.663000 audit[5127]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0b3a8b0 a2=3 a3=0 items=0 ppid=1 pid=5127 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:53.663000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:53.665188 sshd-session[5127]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:53.671467 systemd-logind[1564]: New session 12 of user core. Jan 16 18:05:53.677412 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 18:05:53.682000 audit[5127]: USER_START pid=5127 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:53.685000 audit[5131]: CRED_ACQ pid=5131 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.098970 kubelet[2847]: E0116 18:05:54.098347 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:05:54.101066 sshd[5131]: Connection closed by 68.220.241.50 port 42516 Jan 16 18:05:54.104375 sshd-session[5127]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:54.106000 audit[5127]: USER_END pid=5127 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.106000 audit[5127]: CRED_DISP pid=5127 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.111000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-188.245.199.112:22-68.220.241.50:42516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:54.112447 systemd[1]: sshd@14-188.245.199.112:22-68.220.241.50:42516.service: Deactivated successfully. Jan 16 18:05:54.117194 systemd[1]: session-12.scope: Deactivated successfully. Jan 16 18:05:54.121418 systemd-logind[1564]: Session 12 logged out. Waiting for processes to exit. Jan 16 18:05:54.125130 systemd-logind[1564]: Removed session 12. Jan 16 18:05:54.211000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-188.245.199.112:22-68.220.241.50:42526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:54.212443 systemd[1]: Started sshd@15-188.245.199.112:22-68.220.241.50:42526.service - OpenSSH per-connection server daemon (68.220.241.50:42526). 
Jan 16 18:05:54.759000 audit[5141]: USER_ACCT pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.761099 sshd[5141]: Accepted publickey for core from 68.220.241.50 port 42526 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:05:54.762000 audit[5141]: CRED_ACQ pid=5141 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.763000 audit[5141]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcbacc690 a2=3 a3=0 items=0 ppid=1 pid=5141 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:54.763000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:05:54.767724 sshd-session[5141]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:05:54.779976 systemd-logind[1564]: New session 13 of user core. Jan 16 18:05:54.788237 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 16 18:05:54.793000 audit[5141]: USER_START pid=5141 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:54.796000 audit[5145]: CRED_ACQ pid=5145 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:55.165692 sshd[5145]: Connection closed by 68.220.241.50 port 42526 Jan 16 18:05:55.164035 sshd-session[5141]: pam_unix(sshd:session): session closed for user core Jan 16 18:05:55.166000 audit[5141]: USER_END pid=5141 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:55.167000 audit[5141]: CRED_DISP pid=5141 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:05:55.173329 systemd[1]: sshd@15-188.245.199.112:22-68.220.241.50:42526.service: Deactivated successfully. Jan 16 18:05:55.173000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-188.245.199.112:22-68.220.241.50:42526 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:05:55.177833 systemd[1]: session-13.scope: Deactivated successfully. 
Jan 16 18:05:55.180025 systemd-logind[1564]: Session 13 logged out. Waiting for processes to exit. Jan 16 18:05:55.183748 systemd-logind[1564]: Removed session 13. Jan 16 18:05:56.100379 kubelet[2847]: E0116 18:05:56.100039 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:06:00.280414 systemd[1]: Started sshd@16-188.245.199.112:22-68.220.241.50:42532.service - OpenSSH per-connection server daemon (68.220.241.50:42532). Jan 16 18:06:00.279000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.199.112:22-68.220.241.50:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:00.282653 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 18:06:00.282840 kernel: audit: type=1130 audit(1768586760.279:786): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.199.112:22-68.220.241.50:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:00.823426 sshd[5164]: Accepted publickey for core from 68.220.241.50 port 42532 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:00.822000 audit[5164]: USER_ACCT pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.830014 kernel: audit: type=1101 audit(1768586760.822:787): pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.830114 kernel: audit: type=1103 audit(1768586760.826:788): pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.826000 audit[5164]: CRED_ACQ pid=5164 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.830600 sshd-session[5164]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:00.835328 kernel: audit: type=1006 audit(1768586760.826:789): pid=5164 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 16 18:06:00.835633 kernel: audit: type=1300 audit(1768586760.826:789): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffade22a0 a2=3 a3=0 items=0 ppid=1 pid=5164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:00.826000 audit[5164]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffade22a0 a2=3 a3=0 items=0 ppid=1 pid=5164 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:00.826000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:00.837617 kernel: audit: type=1327 audit(1768586760.826:789): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:00.844169 systemd-logind[1564]: New session 14 of user core. Jan 16 18:06:00.849193 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 18:06:00.853000 audit[5164]: USER_START pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.856000 audit[5168]: CRED_ACQ pid=5168 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.861463 kernel: audit: type=1105 audit(1768586760.853:790): pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:00.861541 kernel: audit: type=1103 audit(1768586760.856:791): pid=5168 uid=0 
auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:01.213460 sshd[5168]: Connection closed by 68.220.241.50 port 42532 Jan 16 18:06:01.214237 sshd-session[5164]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:01.218000 audit[5164]: USER_END pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:01.221000 audit[5164]: CRED_DISP pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:01.227499 kernel: audit: type=1106 audit(1768586761.218:792): pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:01.227575 kernel: audit: type=1104 audit(1768586761.221:793): pid=5164 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:01.226581 systemd-logind[1564]: Session 14 logged out. Waiting for processes to exit. Jan 16 18:06:01.228863 systemd[1]: sshd@16-188.245.199.112:22-68.220.241.50:42532.service: Deactivated successfully. 
Jan 16 18:06:01.228000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-188.245.199.112:22-68.220.241.50:42532 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:01.232043 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 18:06:01.235712 systemd-logind[1564]: Removed session 14. Jan 16 18:06:02.101978 kubelet[2847]: E0116 18:06:02.100481 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:06:02.102995 kubelet[2847]: E0116 18:06:02.102741 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:06:03.100986 kubelet[2847]: E0116 
18:06:03.100228 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:06:06.331281 systemd[1]: Started sshd@17-188.245.199.112:22-68.220.241.50:39092.service - OpenSSH per-connection server daemon (68.220.241.50:39092). Jan 16 18:06:06.334774 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:06.334806 kernel: audit: type=1130 audit(1768586766.330:795): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.199.112:22-68.220.241.50:39092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:06.330000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.199.112:22-68.220.241.50:39092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:06.883000 audit[5181]: USER_ACCT pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.886989 sshd[5181]: Accepted publickey for core from 68.220.241.50 port 39092 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:06.886000 audit[5181]: CRED_ACQ pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.889739 kernel: audit: type=1101 audit(1768586766.883:796): pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.889822 kernel: audit: type=1103 audit(1768586766.886:797): pid=5181 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.888209 sshd-session[5181]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:06.894257 kernel: audit: type=1006 audit(1768586766.886:798): pid=5181 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=15 res=1 Jan 16 18:06:06.894352 kernel: audit: type=1300 audit(1768586766.886:798): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddd5d390 a2=3 a3=0 items=0 ppid=1 pid=5181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:06.886000 audit[5181]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffddd5d390 a2=3 a3=0 items=0 ppid=1 pid=5181 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:06.886000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:06.896971 kernel: audit: type=1327 audit(1768586766.886:798): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:06.900549 systemd-logind[1564]: New session 15 of user core. Jan 16 18:06:06.906208 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 16 18:06:06.912000 audit[5181]: USER_START pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.918122 kernel: audit: type=1105 audit(1768586766.912:799): pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.917000 audit[5185]: CRED_ACQ pid=5185 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:06.924015 kernel: audit: type=1103 audit(1768586766.917:800): pid=5185 uid=0 
auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:07.103343 kubelet[2847]: E0116 18:06:07.103241 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:06:07.343603 sshd[5185]: Connection closed by 68.220.241.50 port 39092 Jan 16 18:06:07.346272 sshd-session[5181]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:07.348000 audit[5181]: USER_END pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:07.358357 kernel: audit: type=1106 audit(1768586767.348:801): pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close 
grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:07.358455 kernel: audit: type=1104 audit(1768586767.348:802): pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:07.348000 audit[5181]: CRED_DISP pid=5181 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:07.359333 systemd[1]: sshd@17-188.245.199.112:22-68.220.241.50:39092.service: Deactivated successfully. Jan 16 18:06:07.359000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-188.245.199.112:22-68.220.241.50:39092 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:07.363793 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 18:06:07.367162 systemd-logind[1564]: Session 15 logged out. Waiting for processes to exit. Jan 16 18:06:07.371601 systemd-logind[1564]: Removed session 15. 
Jan 16 18:06:08.099089 kubelet[2847]: E0116 18:06:08.099040 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:06:09.112478 kubelet[2847]: E0116 18:06:09.111430 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:06:12.454193 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:06:12.454332 kernel: audit: type=1130 audit(1768586772.447:804): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-188.245.199.112:22-68.220.241.50:39102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:12.447000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-188.245.199.112:22-68.220.241.50:39102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:12.448210 systemd[1]: Started sshd@18-188.245.199.112:22-68.220.241.50:39102.service - OpenSSH per-connection server daemon (68.220.241.50:39102). 
Jan 16 18:06:12.988000 audit[5199]: USER_ACCT pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:12.989571 sshd[5199]: Accepted publickey for core from 68.220.241.50 port 39102 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:12.994984 kernel: audit: type=1101 audit(1768586772.988:805): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:12.995122 kernel: audit: type=1103 audit(1768586772.992:806): pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:12.992000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:12.995699 sshd-session[5199]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:12.997196 kernel: audit: type=1006 audit(1768586772.992:807): pid=5199 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 18:06:12.997360 kernel: audit: type=1300 audit(1768586772.992:807): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc986970 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:12.992000 audit[5199]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc986970 a2=3 a3=0 items=0 ppid=1 pid=5199 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:12.992000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:13.001047 kernel: audit: type=1327 audit(1768586772.992:807): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:13.006638 systemd-logind[1564]: New session 16 of user core. Jan 16 18:06:13.013217 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 16 18:06:13.017000 audit[5199]: USER_START pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.022000 audit[5203]: CRED_ACQ pid=5203 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.025794 kernel: audit: type=1105 audit(1768586773.017:808): pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.026016 kernel: audit: type=1103 audit(1768586773.022:809): pid=5203 uid=0 auid=500 
ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.363609 sshd[5203]: Connection closed by 68.220.241.50 port 39102 Jan 16 18:06:13.364913 sshd-session[5199]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:13.366000 audit[5199]: USER_END pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.366000 audit[5199]: CRED_DISP pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.372490 kernel: audit: type=1106 audit(1768586773.366:810): pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.372570 kernel: audit: type=1104 audit(1768586773.366:811): pid=5199 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:13.373444 systemd[1]: sshd@18-188.245.199.112:22-68.220.241.50:39102.service: Deactivated successfully. 
Jan 16 18:06:13.373000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-188.245.199.112:22-68.220.241.50:39102 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:13.377052 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 18:06:13.380440 systemd-logind[1564]: Session 16 logged out. Waiting for processes to exit. Jan 16 18:06:13.382054 systemd-logind[1564]: Removed session 16. Jan 16 18:06:13.480263 systemd[1]: Started sshd@19-188.245.199.112:22-68.220.241.50:49216.service - OpenSSH per-connection server daemon (68.220.241.50:49216). Jan 16 18:06:13.479000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-188.245.199.112:22-68.220.241.50:49216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:14.036000 audit[5215]: USER_ACCT pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.037385 sshd[5215]: Accepted publickey for core from 68.220.241.50 port 49216 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:14.037000 audit[5215]: CRED_ACQ pid=5215 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.037000 audit[5215]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe5a61bb0 a2=3 a3=0 items=0 ppid=1 pid=5215 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:14.037000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:14.039575 sshd-session[5215]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:14.045474 systemd-logind[1564]: New session 17 of user core. Jan 16 18:06:14.053290 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 18:06:14.061000 audit[5215]: USER_START pid=5215 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.063000 audit[5219]: CRED_ACQ pid=5219 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.599363 sshd[5219]: Connection closed by 68.220.241.50 port 49216 Jan 16 18:06:14.600241 sshd-session[5215]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:14.604000 audit[5215]: USER_END pid=5215 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.604000 audit[5215]: CRED_DISP pid=5215 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:14.610897 systemd[1]: 
sshd@19-188.245.199.112:22-68.220.241.50:49216.service: Deactivated successfully. Jan 16 18:06:14.613000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-188.245.199.112:22-68.220.241.50:49216 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:14.617048 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 18:06:14.624313 systemd-logind[1564]: Session 17 logged out. Waiting for processes to exit. Jan 16 18:06:14.626753 systemd-logind[1564]: Removed session 17. Jan 16 18:06:14.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-188.245.199.112:22-68.220.241.50:49230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:14.709324 systemd[1]: Started sshd@20-188.245.199.112:22-68.220.241.50:49230.service - OpenSSH per-connection server daemon (68.220.241.50:49230). 
Jan 16 18:06:15.100633 kubelet[2847]: E0116 18:06:15.100180 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:06:15.249000 audit[5229]: USER_ACCT pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:15.250616 sshd[5229]: Accepted publickey for core from 68.220.241.50 port 49230 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:15.252000 audit[5229]: CRED_ACQ pid=5229 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:15.252000 audit[5229]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef0374e0 a2=3 a3=0 items=0 ppid=1 pid=5229 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:15.252000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:15.254431 sshd-session[5229]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:15.263839 systemd-logind[1564]: New session 18 of user core. 
Jan 16 18:06:15.272231 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 18:06:15.277000 audit[5229]: USER_START pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:15.280000 audit[5235]: CRED_ACQ pid=5235 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.190000 audit[5250]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:16.190000 audit[5250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd4ef0990 a2=0 a3=1 items=0 ppid=2953 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:16.190000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:16.201000 audit[5250]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5250 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:16.201000 audit[5250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd4ef0990 a2=0 a3=1 items=0 ppid=2953 pid=5250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:16.201000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:16.239000 audit[5252]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:16.239000 audit[5252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd0c8af70 a2=0 a3=1 items=0 ppid=2953 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:16.239000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:16.244000 audit[5252]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5252 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:16.244000 audit[5252]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffd0c8af70 a2=0 a3=1 items=0 ppid=2953 pid=5252 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:16.244000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:16.273052 sshd[5235]: Connection closed by 68.220.241.50 port 49230 Jan 16 18:06:16.274014 sshd-session[5229]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:16.276000 audit[5229]: USER_END pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 
addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.276000 audit[5229]: CRED_DISP pid=5229 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.284285 systemd[1]: sshd@20-188.245.199.112:22-68.220.241.50:49230.service: Deactivated successfully. Jan 16 18:06:16.284000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-188.245.199.112:22-68.220.241.50:49230 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:16.289686 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 18:06:16.292050 systemd-logind[1564]: Session 18 logged out. Waiting for processes to exit. Jan 16 18:06:16.294540 systemd-logind[1564]: Removed session 18. Jan 16 18:06:16.387000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.199.112:22-68.220.241.50:49240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:16.388583 systemd[1]: Started sshd@21-188.245.199.112:22-68.220.241.50:49240.service - OpenSSH per-connection server daemon (68.220.241.50:49240). 
Jan 16 18:06:16.970000 audit[5257]: USER_ACCT pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.972076 sshd[5257]: Accepted publickey for core from 68.220.241.50 port 49240 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:16.974000 audit[5257]: CRED_ACQ pid=5257 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.974000 audit[5257]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea2fe680 a2=3 a3=0 items=0 ppid=1 pid=5257 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:16.974000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:16.976168 sshd-session[5257]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:16.985324 systemd-logind[1564]: New session 19 of user core. Jan 16 18:06:16.993214 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 16 18:06:16.997000 audit[5257]: USER_START pid=5257 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:16.999000 audit[5261]: CRED_ACQ pid=5261 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:17.103703 kubelet[2847]: E0116 18:06:17.103642 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:06:17.110992 kubelet[2847]: E0116 18:06:17.100270 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:06:17.586884 sshd[5261]: Connection closed by 68.220.241.50 port 49240 Jan 16 18:06:17.587630 sshd-session[5257]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:17.594033 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 16 18:06:17.594173 kernel: audit: type=1106 audit(1768586777.587:841): pid=5257 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:17.587000 audit[5257]: USER_END pid=5257 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:17.588000 audit[5257]: CRED_DISP pid=5257 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:17.597856 kernel: audit: type=1104 audit(1768586777.588:842): pid=5257 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:17.596447 systemd-logind[1564]: Session 19 logged out. Waiting for processes to exit. 
Jan 16 18:06:17.599846 systemd[1]: sshd@21-188.245.199.112:22-68.220.241.50:49240.service: Deactivated successfully. Jan 16 18:06:17.599000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.199.112:22-68.220.241.50:49240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:17.601938 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 18:06:17.602977 kernel: audit: type=1131 audit(1768586777.599:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-188.245.199.112:22-68.220.241.50:49240 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:17.607100 systemd-logind[1564]: Removed session 19. Jan 16 18:06:17.707000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.199.112:22-68.220.241.50:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:17.708295 systemd[1]: Started sshd@22-188.245.199.112:22-68.220.241.50:49242.service - OpenSSH per-connection server daemon (68.220.241.50:49242). Jan 16 18:06:17.712994 kernel: audit: type=1130 audit(1768586777.707:844): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.199.112:22-68.220.241.50:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:18.290000 audit[5271]: USER_ACCT pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.291966 sshd[5271]: Accepted publickey for core from 68.220.241.50 port 49242 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:18.294048 kernel: audit: type=1101 audit(1768586778.290:845): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.294000 audit[5271]: CRED_ACQ pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.296774 sshd-session[5271]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:18.298830 kernel: audit: type=1103 audit(1768586778.294:846): pid=5271 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.298889 kernel: audit: type=1006 audit(1768586778.294:847): pid=5271 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 16 18:06:18.294000 audit[5271]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd69c10c0 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 
comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:18.301306 kernel: audit: type=1300 audit(1768586778.294:847): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd69c10c0 a2=3 a3=0 items=0 ppid=1 pid=5271 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:18.294000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:18.302334 kernel: audit: type=1327 audit(1768586778.294:847): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:18.308081 systemd-logind[1564]: New session 20 of user core. Jan 16 18:06:18.313388 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 16 18:06:18.316000 audit[5271]: USER_START pid=5271 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.320118 kernel: audit: type=1105 audit(1768586778.316:848): pid=5271 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.319000 audit[5275]: CRED_ACQ pid=5275 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.698523 sshd[5275]: Connection closed by 68.220.241.50 port 
49242 Jan 16 18:06:18.700886 sshd-session[5271]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:18.701000 audit[5271]: USER_END pid=5271 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.702000 audit[5271]: CRED_DISP pid=5271 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:18.707404 systemd[1]: sshd@22-188.245.199.112:22-68.220.241.50:49242.service: Deactivated successfully. Jan 16 18:06:18.706000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-188.245.199.112:22-68.220.241.50:49242 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:18.710878 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 18:06:18.712136 systemd-logind[1564]: Session 20 logged out. Waiting for processes to exit. Jan 16 18:06:18.713998 systemd-logind[1564]: Removed session 20. 
Jan 16 18:06:21.100035 kubelet[2847]: E0116 18:06:21.099850 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:06:21.361000 audit[5287]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:21.361000 audit[5287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5c1f5c0 a2=0 a3=1 items=0 ppid=2953 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:21.361000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:21.368000 audit[5287]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5287 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:06:21.368000 audit[5287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffe5c1f5c0 a2=0 a3=1 items=0 ppid=2953 pid=5287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:21.368000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:06:22.104315 kubelet[2847]: E0116 
18:06:22.104255 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:06:23.101034 kubelet[2847]: E0116 18:06:23.100941 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:06:23.818974 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 16 18:06:23.819134 kernel: audit: type=1130 audit(1768586783.814:855): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.199.112:22-68.220.241.50:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:23.814000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.199.112:22-68.220.241.50:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:23.815352 systemd[1]: Started sshd@23-188.245.199.112:22-68.220.241.50:39502.service - OpenSSH per-connection server daemon (68.220.241.50:39502). Jan 16 18:06:24.412000 audit[5313]: USER_ACCT pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.417849 sshd[5313]: Accepted publickey for core from 68.220.241.50 port 39502 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:24.420909 kernel: audit: type=1101 audit(1768586784.412:856): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.420941 kernel: audit: type=1103 audit(1768586784.417:857): pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.417000 audit[5313]: CRED_ACQ pid=5313 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.421930 sshd-session[5313]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 
18:06:24.423955 kernel: audit: type=1006 audit(1768586784.417:858): pid=5313 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 18:06:24.417000 audit[5313]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee7be080 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:24.427535 kernel: audit: type=1300 audit(1768586784.417:858): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee7be080 a2=3 a3=0 items=0 ppid=1 pid=5313 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:24.417000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:24.428651 kernel: audit: type=1327 audit(1768586784.417:858): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:24.435358 systemd-logind[1564]: New session 21 of user core. Jan 16 18:06:24.440258 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 18:06:24.445000 audit[5313]: USER_START pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.449000 audit[5317]: CRED_ACQ pid=5317 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.451268 kernel: audit: type=1105 audit(1768586784.445:859): pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.454010 kernel: audit: type=1103 audit(1768586784.449:860): pid=5317 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.822696 sshd[5317]: Connection closed by 68.220.241.50 port 39502 Jan 16 18:06:24.825480 sshd-session[5313]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:24.826000 audit[5313]: USER_END pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.826000 audit[5313]: CRED_DISP pid=5313 uid=0 auid=500 
ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.834613 systemd[1]: sshd@23-188.245.199.112:22-68.220.241.50:39502.service: Deactivated successfully. Jan 16 18:06:24.836789 kernel: audit: type=1106 audit(1768586784.826:861): pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.836889 kernel: audit: type=1104 audit(1768586784.826:862): pid=5313 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:24.836000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@23-188.245.199.112:22-68.220.241.50:39502 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:24.844309 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 18:06:24.847152 systemd-logind[1564]: Session 21 logged out. Waiting for processes to exit. Jan 16 18:06:24.849998 systemd-logind[1564]: Removed session 21. Jan 16 18:06:26.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-188.245.199.112:22-210.79.142.221:42516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:26.709362 systemd[1]: Started sshd@24-188.245.199.112:22-210.79.142.221:42516.service - OpenSSH per-connection server daemon (210.79.142.221:42516). 
Jan 16 18:06:27.100394 kubelet[2847]: E0116 18:06:27.099813 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:06:28.316119 sshd[5328]: Received disconnect from 210.79.142.221 port 42516:11: Bye Bye [preauth] Jan 16 18:06:28.316119 sshd[5328]: Disconnected from authenticating user root 210.79.142.221 port 42516 [preauth] Jan 16 18:06:28.315000 audit[5328]: USER_ERR pid=5328 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:bad_ident grantors=? acct="?" exe="/usr/lib64/misc/sshd-session" hostname=210.79.142.221 addr=210.79.142.221 terminal=ssh res=failed' Jan 16 18:06:28.319222 systemd[1]: sshd@24-188.245.199.112:22-210.79.142.221:42516.service: Deactivated successfully. Jan 16 18:06:28.318000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@24-188.245.199.112:22-210.79.142.221:42516 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:29.940847 kernel: kauditd_printk_skb: 4 callbacks suppressed Jan 16 18:06:29.941028 kernel: audit: type=1130 audit(1768586789.936:867): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.199.112:22-68.220.241.50:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:06:29.936000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.199.112:22-68.220.241.50:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:29.936843 systemd[1]: Started sshd@25-188.245.199.112:22-68.220.241.50:39508.service - OpenSSH per-connection server daemon (68.220.241.50:39508). Jan 16 18:06:30.101006 kubelet[2847]: E0116 18:06:30.098465 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:06:30.513000 audit[5334]: USER_ACCT pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.516740 sshd[5334]: Accepted publickey for core from 68.220.241.50 port 39508 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:06:30.518261 kernel: audit: type=1101 audit(1768586790.513:868): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.518000 audit[5334]: CRED_ACQ pid=5334 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.522470 kernel: audit: type=1103 audit(1768586790.518:869): pid=5334 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.522575 kernel: audit: type=1006 audit(1768586790.518:870): pid=5334 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 16 18:06:30.521501 sshd-session[5334]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:06:30.518000 audit[5334]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea4cccb0 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:30.526474 kernel: audit: type=1300 audit(1768586790.518:870): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea4cccb0 a2=3 a3=0 items=0 ppid=1 pid=5334 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:06:30.518000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:30.528099 kernel: audit: type=1327 audit(1768586790.518:870): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:06:30.531801 systemd-logind[1564]: New session 22 of user core. Jan 16 18:06:30.540865 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 16 18:06:30.544000 audit[5334]: USER_START pid=5334 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.549019 kernel: audit: type=1105 audit(1768586790.544:871): pid=5334 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.548000 audit[5338]: CRED_ACQ pid=5338 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.553994 kernel: audit: type=1103 audit(1768586790.548:872): pid=5338 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.919212 sshd[5338]: Connection closed by 68.220.241.50 port 39508 Jan 16 18:06:30.919456 sshd-session[5334]: pam_unix(sshd:session): session closed for user core Jan 16 18:06:30.923000 audit[5334]: USER_END pid=5334 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.923000 audit[5334]: CRED_DISP pid=5334 uid=0 auid=500 
ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.928544 kernel: audit: type=1106 audit(1768586790.923:873): pid=5334 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.928632 kernel: audit: type=1104 audit(1768586790.923:874): pid=5334 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:06:30.929585 systemd[1]: sshd@25-188.245.199.112:22-68.220.241.50:39508.service: Deactivated successfully. Jan 16 18:06:30.929000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@25-188.245.199.112:22-68.220.241.50:39508 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:06:30.932560 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 18:06:30.935645 systemd-logind[1564]: Session 22 logged out. Waiting for processes to exit. Jan 16 18:06:30.940045 systemd-logind[1564]: Removed session 22. 
Jan 16 18:06:32.101091 kubelet[2847]: E0116 18:06:32.100995 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:06:35.102523 kubelet[2847]: E0116 18:06:35.102011 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:06:35.105653 kubelet[2847]: E0116 18:06:35.105606 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to 
\"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:06:36.098380 kubelet[2847]: E0116 18:06:36.098326 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:06:38.099329 kubelet[2847]: E0116 18:06:38.099270 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:06:42.101929 kubelet[2847]: E0116 18:06:42.101035 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:06:46.100187 containerd[1593]: time="2026-01-16T18:06:46.100133515Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:06:46.433926 containerd[1593]: time="2026-01-16T18:06:46.433595903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:46.435407 containerd[1593]: time="2026-01-16T18:06:46.435197099Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:06:46.435407 containerd[1593]: time="2026-01-16T18:06:46.435330855Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:46.435983 kubelet[2847]: E0116 18:06:46.435861 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:06:46.437088 kubelet[2847]: E0116 18:06:46.436509 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:06:46.437088 kubelet[2847]: E0116 18:06:46.436678 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:9acb3dbfadba421ba67d8f0d97111c75,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:46.440082 containerd[1593]: time="2026-01-16T18:06:46.440033886Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:06:46.797352 containerd[1593]: 
time="2026-01-16T18:06:46.797217101Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:46.799050 containerd[1593]: time="2026-01-16T18:06:46.798864696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:06:46.799581 containerd[1593]: time="2026-01-16T18:06:46.799188007Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:46.799792 kubelet[2847]: E0116 18:06:46.799676 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:06:46.799792 kubelet[2847]: E0116 18:06:46.799726 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:06:46.799889 kubelet[2847]: E0116 18:06:46.799841 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-27htv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-767775565b-z2w97_calico-system(76ab8953-d1f7-498a-b509-62959852b74e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:46.801438 kubelet[2847]: E0116 18:06:46.801358 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:06:47.101234 kubelet[2847]: E0116 18:06:47.100731 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:06:47.103041 kubelet[2847]: E0116 18:06:47.102999 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:06:47.103651 kubelet[2847]: E0116 18:06:47.103580 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:06:51.102967 kubelet[2847]: E0116 18:06:51.102846 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:06:55.103270 containerd[1593]: time="2026-01-16T18:06:55.103233504Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:06:55.460049 containerd[1593]: time="2026-01-16T18:06:55.459822209Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:55.461631 containerd[1593]: 
time="2026-01-16T18:06:55.461540616Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:06:55.461916 containerd[1593]: time="2026-01-16T18:06:55.461774492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:55.462092 kubelet[2847]: E0116 18:06:55.462039 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:55.462548 kubelet[2847]: E0116 18:06:55.462102 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:06:55.462548 kubelet[2847]: E0116 18:06:55.462238 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ft29j,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-7995d6d9cc-4t5v5_calico-system(803aa494-41ae-4e2d-9b86-fe41697d291d): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:55.463454 kubelet[2847]: E0116 18:06:55.463420 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-7995d6d9cc-4t5v5" podUID="803aa494-41ae-4e2d-9b86-fe41697d291d" Jan 16 18:06:58.099329 containerd[1593]: time="2026-01-16T18:06:58.099189641Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:06:58.446903 containerd[1593]: time="2026-01-16T18:06:58.446474978Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:06:58.449168 containerd[1593]: time="2026-01-16T18:06:58.448943337Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:06:58.449168 containerd[1593]: time="2026-01-16T18:06:58.449090535Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:58.449990 kubelet[2847]: E0116 18:06:58.449655 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:06:58.449990 kubelet[2847]: E0116 18:06:58.449717 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:06:58.449990 kubelet[2847]: E0116 18:06:58.449857 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 18:06:58.452050 containerd[1593]: time="2026-01-16T18:06:58.451934888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:06:58.805419 containerd[1593]: time="2026-01-16T18:06:58.805136328Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:06:58.807643 containerd[1593]: time="2026-01-16T18:06:58.807545968Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:06:58.807991 containerd[1593]: time="2026-01-16T18:06:58.807587008Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:06:58.808433 kubelet[2847]: E0116 18:06:58.808371 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:06:58.808433 kubelet[2847]: E0116 18:06:58.808448 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:06:58.808801 kubelet[2847]: E0116 18:06:58.808644 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7bx5f,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-x7tcq_calico-system(c7bb61dc-6bdf-4bf7-9a33-c67b671e2820): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:06:58.809928 kubelet[2847]: E0116 18:06:58.809867 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-x7tcq" podUID="c7bb61dc-6bdf-4bf7-9a33-c67b671e2820" Jan 16 18:07:01.102321 kubelet[2847]: E0116 18:07:01.101809 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-767775565b-z2w97" podUID="76ab8953-d1f7-498a-b509-62959852b74e" Jan 16 18:07:01.103401 containerd[1593]: time="2026-01-16T18:07:01.103163406Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:07:01.464716 containerd[1593]: 
time="2026-01-16T18:07:01.464257405Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:01.466576 containerd[1593]: time="2026-01-16T18:07:01.466482694Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:07:01.466900 containerd[1593]: time="2026-01-16T18:07:01.466756290Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:01.468012 kubelet[2847]: E0116 18:07:01.467705 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:01.468012 kubelet[2847]: E0116 18:07:01.467763 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:01.468757 kubelet[2847]: E0116 18:07:01.467920 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-5x7th,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-lxhzl_calico-apiserver(acf5809a-b3e8-41d0-86b6-edf701abdda5): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:01.470143 kubelet[2847]: E0116 18:07:01.469871 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-lxhzl" podUID="acf5809a-b3e8-41d0-86b6-edf701abdda5" Jan 16 18:07:01.529873 systemd[1]: cri-containerd-47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5.scope: Deactivated successfully. Jan 16 18:07:01.531441 systemd[1]: cri-containerd-47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5.scope: Consumed 43.034s CPU time, 98M memory peak. Jan 16 18:07:01.541585 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:07:01.541743 kernel: audit: type=1334 audit(1768586821.538:876): prog-id=144 op=UNLOAD Jan 16 18:07:01.538000 audit: BPF prog-id=144 op=UNLOAD Jan 16 18:07:01.541857 containerd[1593]: time="2026-01-16T18:07:01.540670618Z" level=info msg="received container exit event container_id:\"47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5\" id:\"47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5\" pid:3176 exit_status:1 exited_at:{seconds:1768586821 nanos:538844523}" Jan 16 18:07:01.538000 audit: BPF prog-id=148 op=UNLOAD Jan 16 18:07:01.544464 kernel: audit: type=1334 audit(1768586821.538:877): prog-id=148 op=UNLOAD Jan 16 18:07:01.584723 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5-rootfs.mount: Deactivated successfully. 
Jan 16 18:07:01.887380 systemd[1]: cri-containerd-64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba.scope: Deactivated successfully. Jan 16 18:07:01.888826 systemd[1]: cri-containerd-64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba.scope: Consumed 4.544s CPU time, 63.8M memory peak, 3M read from disk. Jan 16 18:07:01.891000 audit: BPF prog-id=101 op=UNLOAD Jan 16 18:07:01.894567 containerd[1593]: time="2026-01-16T18:07:01.894510157Z" level=info msg="received container exit event container_id:\"64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba\" id:\"64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba\" pid:2691 exit_status:1 exited_at:{seconds:1768586821 nanos:889902742}" Jan 16 18:07:01.895572 kernel: audit: type=1334 audit(1768586821.891:878): prog-id=101 op=UNLOAD Jan 16 18:07:01.895675 kernel: audit: type=1334 audit(1768586821.891:879): prog-id=105 op=UNLOAD Jan 16 18:07:01.891000 audit: BPF prog-id=105 op=UNLOAD Jan 16 18:07:01.897353 kernel: audit: type=1334 audit(1768586821.894:880): prog-id=254 op=LOAD Jan 16 18:07:01.894000 audit: BPF prog-id=254 op=LOAD Jan 16 18:07:01.894000 audit: BPF prog-id=86 op=UNLOAD Jan 16 18:07:01.899210 kernel: audit: type=1334 audit(1768586821.894:881): prog-id=86 op=UNLOAD Jan 16 18:07:01.926202 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba-rootfs.mount: Deactivated successfully. Jan 16 18:07:01.979467 kubelet[2847]: E0116 18:07:01.978032 2847 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:41182->10.0.0.2:2379: read: connection timed out" Jan 16 18:07:01.982256 systemd[1]: cri-containerd-43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2.scope: Deactivated successfully. 
Jan 16 18:07:01.983123 systemd[1]: cri-containerd-43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2.scope: Consumed 3.914s CPU time, 24.9M memory peak, 3.6M read from disk. Jan 16 18:07:01.983000 audit: BPF prog-id=255 op=LOAD Jan 16 18:07:01.986719 kernel: audit: type=1334 audit(1768586821.983:882): prog-id=255 op=LOAD Jan 16 18:07:01.986821 kernel: audit: type=1334 audit(1768586821.983:883): prog-id=91 op=UNLOAD Jan 16 18:07:01.983000 audit: BPF prog-id=91 op=UNLOAD Jan 16 18:07:01.989303 kernel: audit: type=1334 audit(1768586821.987:884): prog-id=106 op=UNLOAD Jan 16 18:07:01.989415 kernel: audit: type=1334 audit(1768586821.987:885): prog-id=110 op=UNLOAD Jan 16 18:07:01.987000 audit: BPF prog-id=106 op=UNLOAD Jan 16 18:07:01.987000 audit: BPF prog-id=110 op=UNLOAD Jan 16 18:07:01.993223 kubelet[2847]: I0116 18:07:01.993179 2847 scope.go:117] "RemoveContainer" containerID="64985ee81b99061295ed229941ba69597e88ab78d68e33ea9a689d30ed191eba" Jan 16 18:07:01.993770 containerd[1593]: time="2026-01-16T18:07:01.993736372Z" level=info msg="received container exit event container_id:\"43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2\" id:\"43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2\" pid:2698 exit_status:1 exited_at:{seconds:1768586821 nanos:992372511}" Jan 16 18:07:01.998934 kubelet[2847]: I0116 18:07:01.998870 2847 scope.go:117] "RemoveContainer" containerID="47313bec8013a16112684727cc721a46fe5f7d6bbad0d4a2375850cefee80ea5" Jan 16 18:07:02.006586 containerd[1593]: time="2026-01-16T18:07:02.006545837Z" level=info msg="CreateContainer within sandbox \"a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 16 18:07:02.008779 containerd[1593]: time="2026-01-16T18:07:02.008353374Z" level=info msg="CreateContainer within sandbox \"2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e\" for container 
&ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 16 18:07:02.022906 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1875579936.mount: Deactivated successfully. Jan 16 18:07:02.025712 containerd[1593]: time="2026-01-16T18:07:02.025657586Z" level=info msg="Container 0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:02.032471 containerd[1593]: time="2026-01-16T18:07:02.032415017Z" level=info msg="Container 52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:02.036363 containerd[1593]: time="2026-01-16T18:07:02.036290806Z" level=info msg="CreateContainer within sandbox \"a3041452598c3679762f923a91d83ba16d120245ae65758dbf7dd67f1135865b\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4\"" Jan 16 18:07:02.037631 containerd[1593]: time="2026-01-16T18:07:02.037586309Z" level=info msg="StartContainer for \"0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4\"" Jan 16 18:07:02.039421 containerd[1593]: time="2026-01-16T18:07:02.039383565Z" level=info msg="connecting to shim 0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4" address="unix:///run/containerd/s/db9174735d8d8f33c2a87d093df4979ac72d0ab4f41557706c5c59e97d875bc2" protocol=ttrpc version=3 Jan 16 18:07:02.044830 containerd[1593]: time="2026-01-16T18:07:02.044574017Z" level=info msg="CreateContainer within sandbox \"2e768c5b441a9601002c70c82a11a465e180166069b3f2bb2f08bdbcc905283e\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f\"" Jan 16 18:07:02.045989 containerd[1593]: time="2026-01-16T18:07:02.045735281Z" level=info msg="StartContainer for \"52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f\"" Jan 16 
18:07:02.050060 containerd[1593]: time="2026-01-16T18:07:02.050012185Z" level=info msg="connecting to shim 52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f" address="unix:///run/containerd/s/5e271199ebe9645c98906e4f359aeeb2c102fa2c6e09d7d1d5a5d0f8bf35d5cd" protocol=ttrpc version=3 Jan 16 18:07:02.073583 systemd[1]: Started cri-containerd-0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4.scope - libcontainer container 0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4. Jan 16 18:07:02.087551 systemd[1]: Started cri-containerd-52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f.scope - libcontainer container 52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f. Jan 16 18:07:02.099856 containerd[1593]: time="2026-01-16T18:07:02.099812729Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:07:02.107000 audit: BPF prog-id=256 op=LOAD Jan 16 18:07:02.108000 audit: BPF prog-id=257 op=LOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.108000 audit: BPF prog-id=257 op=UNLOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.108000 audit: BPF prog-id=258 op=LOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.108000 audit: BPF prog-id=259 op=LOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.108000 audit: BPF prog-id=259 op=UNLOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.108000 audit: BPF prog-id=258 op=UNLOAD Jan 16 18:07:02.108000 audit[5420]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.108000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.109000 audit: BPF prog-id=260 op=LOAD Jan 16 18:07:02.109000 audit[5420]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=3040 pid=5420 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.109000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3033333563333765386333323366353364363565333365363037333138 Jan 16 18:07:02.115000 audit: BPF prog-id=261 op=LOAD Jan 16 18:07:02.116000 audit: BPF prog-id=262 op=LOAD Jan 16 18:07:02.116000 audit[5426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.116000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.117000 audit: BPF prog-id=262 op=UNLOAD Jan 16 18:07:02.117000 audit[5426]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.117000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.119000 audit: BPF prog-id=263 op=LOAD Jan 16 18:07:02.119000 audit[5426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.119000 audit: BPF prog-id=264 op=LOAD Jan 16 18:07:02.119000 audit[5426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 
ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.119000 audit: BPF prog-id=264 op=UNLOAD Jan 16 18:07:02.119000 audit[5426]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.119000 audit: BPF prog-id=263 op=UNLOAD Jan 16 18:07:02.119000 audit[5426]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.119000 audit: BPF prog-id=265 op=LOAD Jan 16 18:07:02.119000 audit[5426]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 
a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2530 pid=5426 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:02.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532363330653235646132363466623966643938323237376135373263 Jan 16 18:07:02.150059 containerd[1593]: time="2026-01-16T18:07:02.149860510Z" level=info msg="StartContainer for \"0335c37e8c323f53d65e33e60731887b3673be5996de246f399d6bba21ed4bb4\" returns successfully" Jan 16 18:07:02.171433 containerd[1593]: time="2026-01-16T18:07:02.171318908Z" level=info msg="StartContainer for \"52630e25da264fb9fd982277a572c547704528df9c749c222c3c04a2f061334f\" returns successfully" Jan 16 18:07:02.452586 containerd[1593]: time="2026-01-16T18:07:02.452407486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:02.457010 containerd[1593]: time="2026-01-16T18:07:02.454899413Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:07:02.457222 containerd[1593]: time="2026-01-16T18:07:02.454973492Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:02.457282 kubelet[2847]: E0116 18:07:02.457230 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 
16 18:07:02.458223 kubelet[2847]: E0116 18:07:02.457283 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:07:02.458223 kubelet[2847]: E0116 18:07:02.457432 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7kdrc,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-zmqhv_calico-system(29a43599-84fe-43df-8ccd-cc9d89aeeb36): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:02.458804 kubelet[2847]: E0116 18:07:02.458669 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-zmqhv" podUID="29a43599-84fe-43df-8ccd-cc9d89aeeb36" Jan 16 18:07:02.587678 systemd[1]: 
run-containerd-io.containerd.runtime.v2.task-k8s.io-43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2-rootfs.mount: Deactivated successfully. Jan 16 18:07:03.005373 kubelet[2847]: I0116 18:07:03.004656 2847 scope.go:117] "RemoveContainer" containerID="43bea4ee8e5bd76728ad4f24733ecd207f43aa0a4a4fc1b9594969db6db208d2" Jan 16 18:07:03.009853 containerd[1593]: time="2026-01-16T18:07:03.009815913Z" level=info msg="CreateContainer within sandbox \"0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 16 18:07:03.029912 containerd[1593]: time="2026-01-16T18:07:03.027786770Z" level=info msg="Container 9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:07:03.038066 containerd[1593]: time="2026-01-16T18:07:03.038020163Z" level=info msg="CreateContainer within sandbox \"0aabc2181524e22daf6a44579ffb2459d5fafd591edb1a3dd32e936e7d14575d\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364\"" Jan 16 18:07:03.038736 containerd[1593]: time="2026-01-16T18:07:03.038709675Z" level=info msg="StartContainer for \"9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364\"" Jan 16 18:07:03.040256 containerd[1593]: time="2026-01-16T18:07:03.040220816Z" level=info msg="connecting to shim 9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364" address="unix:///run/containerd/s/1c6bcf964c3e6a14cadfc9e16e4bb402cb7af0c550d84e0d2cc52e044330cf5e" protocol=ttrpc version=3 Jan 16 18:07:03.069178 systemd[1]: Started cri-containerd-9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364.scope - libcontainer container 9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364. 
Jan 16 18:07:03.082000 audit: BPF prog-id=266 op=LOAD Jan 16 18:07:03.083000 audit: BPF prog-id=267 op=LOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=267 op=UNLOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=268 op=LOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=269 op=LOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=269 op=UNLOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=268 op=UNLOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.083000 audit: BPF prog-id=270 op=LOAD Jan 16 18:07:03.083000 audit[5481]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2572 pid=5481 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:07:03.083000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3966353230616363373366626338356432643062363332643131356363 Jan 16 18:07:03.104214 containerd[1593]: time="2026-01-16T18:07:03.104168824Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:07:03.146544 containerd[1593]: time="2026-01-16T18:07:03.146251303Z" level=info msg="StartContainer for \"9f520acc73fbc85d2d0b632d115cc489bf603b3266fb6f9786ffebd535d59364\" returns successfully" Jan 16 18:07:03.441638 containerd[1593]: time="2026-01-16T18:07:03.441003572Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:07:03.449134 containerd[1593]: time="2026-01-16T18:07:03.448460440Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:07:03.449134 containerd[1593]: time="2026-01-16T18:07:03.448510879Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:07:03.449240 kubelet[2847]: E0116 18:07:03.448701 2847 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:03.449240 kubelet[2847]: E0116 18:07:03.448748 2847 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:07:03.449240 kubelet[2847]: E0116 18:07:03.448891 2847 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tt8wg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-795d7bd556-2wqnw_calico-apiserver(324c3e73-c2dc-4ae9-8875-3415b8305668): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:07:03.450112 kubelet[2847]: E0116 18:07:03.450072 2847 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-795d7bd556-2wqnw" podUID="324c3e73-c2dc-4ae9-8875-3415b8305668" Jan 16 18:07:05.560353 kubelet[2847]: E0116 18:07:05.560192 2847 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:40998->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-kube-controllers-7995d6d9cc-4t5v5.188b482cf295fa2f calico-system 1376 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-system,Name:calico-kube-controllers-7995d6d9cc-4t5v5,UID:803aa494-41ae-4e2d-9b86-fe41697d291d,APIVersion:v1,ResourceVersion:839,FieldPath:spec.containers{calico-kube-controllers},},Reason:Pulling,Message:Pulling image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-f44e0c3b96,},FirstTimestamp:2026-01-16 18:03:56 +0000 UTC,LastTimestamp:2026-01-16 18:06:55.101726333 +0000 UTC m=+226.133646664,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-f44e0c3b96,}"