Jan 16 17:59:54.496894 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1]
Jan 16 17:59:54.496922 kernel: Linux version 6.12.65-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Fri Jan 16 03:04:27 -00 2026
Jan 16 17:59:54.496934 kernel: KASLR enabled
Jan 16 17:59:54.496941 kernel: efi: EFI v2.7 by Ubuntu distribution of EDK II
Jan 16 17:59:54.496948 kernel: efi: SMBIOS 3.0=0x139ed0000 MEMATTR=0x1390bb018 ACPI 2.0=0x136760018 RNG=0x13676e918 MEMRESERVE=0x136b41218
Jan 16 17:59:54.496954 kernel: random: crng init done
Jan 16 17:59:54.496962 kernel: secureboot: Secure boot disabled
Jan 16 17:59:54.496969 kernel: ACPI: Early table checksum verification disabled
Jan 16 17:59:54.496976 kernel: ACPI: RSDP 0x0000000136760018 000024 (v02 BOCHS )
Jan 16 17:59:54.496984 kernel: ACPI: XSDT 0x000000013676FE98 00006C (v01 BOCHS BXPC 00000001 01000013)
Jan 16 17:59:54.496992 kernel: ACPI: FACP 0x000000013676FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497000 kernel: ACPI: DSDT 0x0000000136767518 001468 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497007 kernel: ACPI: APIC 0x000000013676FC18 000108 (v04 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497014 kernel: ACPI: PPTT 0x000000013676FD98 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497024 kernel: ACPI: GTDT 0x000000013676D898 000060 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497031 kernel: ACPI: MCFG 0x000000013676FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497039 kernel: ACPI: SPCR 0x000000013676E818 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497047 kernel: ACPI: DBG2 0x000000013676E898 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497054 kernel: ACPI: IORT 0x000000013676E418 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001)
Jan 16 17:59:54.497061 kernel: ACPI: BGRT 0x000000013676E798 000038 (v01 INTEL EDK2 00000002 01000013)
Jan 16 17:59:54.497067 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600
Jan 16 17:59:54.497074 kernel: ACPI: Use ACPI SPCR as default console: Yes
Jan 16 17:59:54.497080 kernel: NUMA: Faking a node at [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 17:59:54.497088 kernel: NODE_DATA(0) allocated [mem 0x13967da00-0x139684fff]
Jan 16 17:59:54.497095 kernel: Zone ranges:
Jan 16 17:59:54.497101 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff]
Jan 16 17:59:54.497108 kernel: DMA32 empty
Jan 16 17:59:54.497114 kernel: Normal [mem 0x0000000100000000-0x0000000139ffffff]
Jan 16 17:59:54.497121 kernel: Device empty
Jan 16 17:59:54.497127 kernel: Movable zone start for each node
Jan 16 17:59:54.497133 kernel: Early memory node ranges
Jan 16 17:59:54.497140 kernel: node 0: [mem 0x0000000040000000-0x000000013666ffff]
Jan 16 17:59:54.497146 kernel: node 0: [mem 0x0000000136670000-0x000000013667ffff]
Jan 16 17:59:54.497153 kernel: node 0: [mem 0x0000000136680000-0x000000013676ffff]
Jan 16 17:59:54.497159 kernel: node 0: [mem 0x0000000136770000-0x0000000136b3ffff]
Jan 16 17:59:54.497170 kernel: node 0: [mem 0x0000000136b40000-0x0000000139e1ffff]
Jan 16 17:59:54.497178 kernel: node 0: [mem 0x0000000139e20000-0x0000000139eaffff]
Jan 16 17:59:54.497185 kernel: node 0: [mem 0x0000000139eb0000-0x0000000139ebffff]
Jan 16 17:59:54.497193 kernel: node 0: [mem 0x0000000139ec0000-0x0000000139fdffff]
Jan 16 17:59:54.497200 kernel: node 0: [mem 0x0000000139fe0000-0x0000000139ffffff]
Jan 16 17:59:54.497211 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x0000000139ffffff]
Jan 16 17:59:54.497221 kernel: On node 0, zone Normal: 24576 pages in unavailable ranges
Jan 16 17:59:54.497229 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1
Jan 16 17:59:54.497237 kernel: psci: probing for conduit method from ACPI.
Jan 16 17:59:54.497245 kernel: psci: PSCIv1.1 detected in firmware.
Jan 16 17:59:54.497252 kernel: psci: Using standard PSCI v0.2 function IDs
Jan 16 17:59:54.497260 kernel: psci: Trusted OS migration not required
Jan 16 17:59:54.497267 kernel: psci: SMC Calling Convention v1.1
Jan 16 17:59:54.497275 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003)
Jan 16 17:59:54.497285 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168
Jan 16 17:59:54.497292 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096
Jan 16 17:59:54.497302 kernel: pcpu-alloc: [0] 0 [0] 1
Jan 16 17:59:54.497310 kernel: Detected PIPT I-cache on CPU0
Jan 16 17:59:54.497318 kernel: CPU features: detected: GIC system register CPU interface
Jan 16 17:59:54.497325 kernel: CPU features: detected: Spectre-v4
Jan 16 17:59:54.497333 kernel: CPU features: detected: Spectre-BHB
Jan 16 17:59:54.497341 kernel: CPU features: kernel page table isolation forced ON by KASLR
Jan 16 17:59:54.497348 kernel: CPU features: detected: Kernel page table isolation (KPTI)
Jan 16 17:59:54.497357 kernel: CPU features: detected: ARM erratum 1418040
Jan 16 17:59:54.497365 kernel: CPU features: detected: SSBS not fully self-synchronizing
Jan 16 17:59:54.497375 kernel: alternatives: applying boot alternatives
Jan 16 17:59:54.497384 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13
Jan 16 17:59:54.497392 kernel: Dentry cache hash table entries: 524288 (order: 10, 4194304 bytes, linear)
Jan 16 17:59:54.497400 kernel: Inode-cache hash table entries: 262144 (order: 9, 2097152 bytes, linear)
Jan 16 17:59:54.497408 kernel: Fallback order for Node 0: 0
Jan 16 17:59:54.497415 kernel: Built 1 zonelists, mobility grouping on. Total pages: 1024000
Jan 16 17:59:54.497422 kernel: Policy zone: Normal
Jan 16 17:59:54.497429 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off
Jan 16 17:59:54.497437 kernel: software IO TLB: area num 2.
Jan 16 17:59:54.497443 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB)
Jan 16 17:59:54.497452 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=2, Nodes=1
Jan 16 17:59:54.497459 kernel: rcu: Preemptible hierarchical RCU implementation.
Jan 16 17:59:54.497467 kernel: rcu: RCU event tracing is enabled.
Jan 16 17:59:54.497474 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=2.
Jan 16 17:59:54.497484 kernel: Trampoline variant of Tasks RCU enabled.
Jan 16 17:59:54.497491 kernel: Tracing variant of Tasks RCU enabled.
Jan 16 17:59:54.497498 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies.
Jan 16 17:59:54.497505 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=2
Jan 16 17:59:54.497512 kernel: RCU Tasks: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 17:59:54.497519 kernel: RCU Tasks Trace: Setting shift to 1 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=2.
Jan 16 17:59:54.497526 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0
Jan 16 17:59:54.497535 kernel: GICv3: 256 SPIs implemented
Jan 16 17:59:54.497541 kernel: GICv3: 0 Extended SPIs implemented
Jan 16 17:59:54.497548 kernel: Root IRQ handler: gic_handle_irq
Jan 16 17:59:54.497555 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI
Jan 16 17:59:54.497562 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0
Jan 16 17:59:54.497569 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000
Jan 16 17:59:54.497576 kernel: ITS [mem 0x08080000-0x0809ffff]
Jan 16 17:59:54.497583 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100100000 (indirect, esz 8, psz 64K, shr 1)
Jan 16 17:59:54.497590 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100110000 (flat, esz 8, psz 64K, shr 1)
Jan 16 17:59:54.497597 kernel: GICv3: using LPI property table @0x0000000100120000
Jan 16 17:59:54.497604 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100130000
Jan 16 17:59:54.497612 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention.
Jan 16 17:59:54.497844 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 17:59:54.497862 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt).
Jan 16 17:59:54.497870 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns
Jan 16 17:59:54.498007 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns
Jan 16 17:59:54.498019 kernel: Console: colour dummy device 80x25
Jan 16 17:59:54.498027 kernel: ACPI: Core revision 20240827
Jan 16 17:59:54.498035 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000)
Jan 16 17:59:54.498043 kernel: pid_max: default: 32768 minimum: 301
Jan 16 17:59:54.498055 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima
Jan 16 17:59:54.498063 kernel: landlock: Up and running.
Jan 16 17:59:54.498070 kernel: SELinux: Initializing.
Jan 16 17:59:54.498078 kernel: Mount-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 17:59:54.498085 kernel: Mountpoint-cache hash table entries: 8192 (order: 4, 65536 bytes, linear)
Jan 16 17:59:54.498093 kernel: rcu: Hierarchical SRCU implementation.
Jan 16 17:59:54.498132 kernel: rcu: Max phase no-delay instances is 400.
Jan 16 17:59:54.498142 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level
Jan 16 17:59:54.498152 kernel: Remapping and enabling EFI services.
Jan 16 17:59:54.498160 kernel: smp: Bringing up secondary CPUs ...
Jan 16 17:59:54.498167 kernel: Detected PIPT I-cache on CPU1
Jan 16 17:59:54.498175 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000
Jan 16 17:59:54.498182 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100140000
Jan 16 17:59:54.498190 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040
Jan 16 17:59:54.498197 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1]
Jan 16 17:59:54.498206 kernel: smp: Brought up 1 node, 2 CPUs
Jan 16 17:59:54.498214 kernel: SMP: Total of 2 processors activated.
Jan 16 17:59:54.498226 kernel: CPU: All CPU(s) started at EL1
Jan 16 17:59:54.498235 kernel: CPU features: detected: 32-bit EL0 Support
Jan 16 17:59:54.498243 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence
Jan 16 17:59:54.498251 kernel: CPU features: detected: Common not Private translations
Jan 16 17:59:54.498258 kernel: CPU features: detected: CRC32 instructions
Jan 16 17:59:54.498266 kernel: CPU features: detected: Enhanced Virtualization Traps
Jan 16 17:59:54.498276 kernel: CPU features: detected: RCpc load-acquire (LDAPR)
Jan 16 17:59:54.498283 kernel: CPU features: detected: LSE atomic instructions
Jan 16 17:59:54.498291 kernel: CPU features: detected: Privileged Access Never
Jan 16 17:59:54.498299 kernel: CPU features: detected: RAS Extension Support
Jan 16 17:59:54.498306 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS)
Jan 16 17:59:54.498316 kernel: alternatives: applying system-wide alternatives
Jan 16 17:59:54.498323 kernel: CPU features: detected: Hardware dirty bit management on CPU0-1
Jan 16 17:59:54.498332 kernel: Memory: 3885860K/4096000K available (11200K kernel code, 2458K rwdata, 9092K rodata, 12480K init, 1038K bss, 188660K reserved, 16384K cma-reserved)
Jan 16 17:59:54.498340 kernel: devtmpfs: initialized
Jan 16 17:59:54.498349 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns
Jan 16 17:59:54.498356 kernel: futex hash table entries: 512 (order: 3, 32768 bytes, linear)
Jan 16 17:59:54.498364 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL
Jan 16 17:59:54.498373 kernel: 0 pages in range for non-PLT usage
Jan 16 17:59:54.498381 kernel: 515152 pages in range for PLT usage
Jan 16 17:59:54.498389 kernel: pinctrl core: initialized pinctrl subsystem
Jan 16 17:59:54.498396 kernel: SMBIOS 3.0.0 present.
Jan 16 17:59:54.498404 kernel: DMI: Hetzner vServer/KVM Virtual Machine, BIOS 20171111 11/11/2017
Jan 16 17:59:54.498412 kernel: DMI: Memory slots populated: 1/1
Jan 16 17:59:54.498420 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family
Jan 16 17:59:54.498428 kernel: DMA: preallocated 512 KiB GFP_KERNEL pool for atomic allocations
Jan 16 17:59:54.498437 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations
Jan 16 17:59:54.498445 kernel: DMA: preallocated 512 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations
Jan 16 17:59:54.498452 kernel: audit: initializing netlink subsys (disabled)
Jan 16 17:59:54.498460 kernel: audit: type=2000 audit(0.018:1): state=initialized audit_enabled=0 res=1
Jan 16 17:59:54.498468 kernel: thermal_sys: Registered thermal governor 'step_wise'
Jan 16 17:59:54.498476 kernel: cpuidle: using governor menu
Jan 16 17:59:54.498483 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers.
Jan 16 17:59:54.498493 kernel: ASID allocator initialised with 32768 entries
Jan 16 17:59:54.498500 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5
Jan 16 17:59:54.498508 kernel: Serial: AMBA PL011 UART driver
Jan 16 17:59:54.498516 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages
Jan 16 17:59:54.498523 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page
Jan 16 17:59:54.498531 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages
Jan 16 17:59:54.498539 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page
Jan 16 17:59:54.498548 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages
Jan 16 17:59:54.498556 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page
Jan 16 17:59:54.498564 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages
Jan 16 17:59:54.498584 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page
Jan 16 17:59:54.498594 kernel: ACPI: Added _OSI(Module Device)
Jan 16 17:59:54.498602 kernel: ACPI: Added _OSI(Processor Device)
Jan 16 17:59:54.498610 kernel: ACPI: Added _OSI(Processor Aggregator Device)
Jan 16 17:59:54.500669 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded
Jan 16 17:59:54.500694 kernel: ACPI: Interpreter enabled
Jan 16 17:59:54.500703 kernel: ACPI: Using GIC for interrupt routing
Jan 16 17:59:54.500757 kernel: ACPI: MCFG table detected, 1 entries
Jan 16 17:59:54.500766 kernel: ACPI: CPU0 has been hot-added
Jan 16 17:59:54.500774 kernel: ACPI: CPU1 has been hot-added
Jan 16 17:59:54.500783 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA
Jan 16 17:59:54.500792 kernel: printk: legacy console [ttyAMA0] enabled
Jan 16 17:59:54.500807 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff])
Jan 16 17:59:54.500990 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3]
Jan 16 17:59:54.501079 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR]
Jan 16 17:59:54.501160 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability]
Jan 16 17:59:54.501241 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00
Jan 16 17:59:54.501327 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff]
Jan 16 17:59:54.501338 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window]
Jan 16 17:59:54.501346 kernel: PCI host bridge to bus 0000:00
Jan 16 17:59:54.501440 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window]
Jan 16 17:59:54.501514 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window]
Jan 16 17:59:54.501588 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window]
Jan 16 17:59:54.501693 kernel: pci_bus 0000:00: root bus resource [bus 00-ff]
Jan 16 17:59:54.501823 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint
Jan 16 17:59:54.501920 kernel: pci 0000:00:01.0: [1af4:1050] type 00 class 0x038000 conventional PCI endpoint
Jan 16 17:59:54.502013 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11289000-0x11289fff]
Jan 16 17:59:54.502097 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]
Jan 16 17:59:54.502271 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.502370 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11288000-0x11288fff]
Jan 16 17:59:54.502453 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 17:59:54.502573 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]
Jan 16 17:59:54.503440 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref]
Jan 16 17:59:54.504020 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.504213 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11287000-0x11287fff]
Jan 16 17:59:54.504303 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 17:59:54.504385 kernel: pci 0000:00:02.1: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 17:59:54.504478 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.504563 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11286000-0x11286fff]
Jan 16 17:59:54.504667 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 17:59:54.504784 kernel: pci 0000:00:02.2: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 17:59:54.504869 kernel: pci 0000:00:02.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref]
Jan 16 17:59:54.504961 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.505041 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11285000-0x11285fff]
Jan 16 17:59:54.505122 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 17:59:54.505200 kernel: pci 0000:00:02.3: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 17:59:54.505283 kernel: pci 0000:00:02.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref]
Jan 16 17:59:54.505372 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.505453 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11284000-0x11284fff]
Jan 16 17:59:54.505532 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 17:59:54.505820 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 17:59:54.505918 kernel: pci 0000:00:02.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref]
Jan 16 17:59:54.506024 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.506108 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11283000-0x11283fff]
Jan 16 17:59:54.506199 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 17:59:54.506292 kernel: pci 0000:00:02.5: bridge window [mem 0x10600000-0x107fffff]
Jan 16 17:59:54.506384 kernel: pci 0000:00:02.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref]
Jan 16 17:59:54.506485 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.506581 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11282000-0x11282fff]
Jan 16 17:59:54.506726 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 17:59:54.506833 kernel: pci 0000:00:02.6: bridge window [mem 0x10400000-0x105fffff]
Jan 16 17:59:54.506931 kernel: pci 0000:00:02.6: bridge window [mem 0x8000500000-0x80005fffff 64bit pref]
Jan 16 17:59:54.507102 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.507193 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11281000-0x11281fff]
Jan 16 17:59:54.507278 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 17:59:54.507388 kernel: pci 0000:00:02.7: bridge window [mem 0x10200000-0x103fffff]
Jan 16 17:59:54.507487 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port
Jan 16 17:59:54.507568 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11280000-0x11280fff]
Jan 16 17:59:54.507685 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 17:59:54.507830 kernel: pci 0000:00:03.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 17:59:54.507958 kernel: pci 0000:00:04.0: [1b36:0002] type 00 class 0x070002 conventional PCI endpoint
Jan 16 17:59:54.508064 kernel: pci 0000:00:04.0: BAR 0 [io 0x0000-0x0007]
Jan 16 17:59:54.508160 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 16 17:59:54.508246 kernel: pci 0000:01:00.0: BAR 1 [mem 0x11000000-0x11000fff]
Jan 16 17:59:54.508335 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]
Jan 16 17:59:54.508418 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 16 17:59:54.508512 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint
Jan 16 17:59:54.508609 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10e00000-0x10e03fff 64bit]
Jan 16 17:59:54.508761 kernel: pci 0000:03:00.0: [1af4:1043] type 00 class 0x078000 PCIe Endpoint
Jan 16 17:59:54.508851 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10c00000-0x10c00fff]
Jan 16 17:59:54.508959 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref]
Jan 16 17:59:54.509062 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint
Jan 16 17:59:54.509147 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref]
Jan 16 17:59:54.509291 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint
Jan 16 17:59:54.509380 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]
Jan 16 17:59:54.509467 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref]
Jan 16 17:59:54.509557 kernel: pci 0000:06:00.0: [1af4:1048] type 00 class 0x010000 PCIe Endpoint
Jan 16 17:59:54.510354 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10600000-0x10600fff]
Jan 16 17:59:54.510481 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]
Jan 16 17:59:54.510581 kernel: pci 0000:07:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint
Jan 16 17:59:54.510761 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10400000-0x10400fff]
Jan 16 17:59:54.510889 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000500000-0x8000503fff 64bit pref]
Jan 16 17:59:54.510981 kernel: pci 0000:07:00.0: ROM [mem 0xfff80000-0xffffffff pref]
Jan 16 17:59:54.511138 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000
Jan 16 17:59:54.511226 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000
Jan 16 17:59:54.511306 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000
Jan 16 17:59:54.511427 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000
Jan 16 17:59:54.511523 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000
Jan 16 17:59:54.511607 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000
Jan 16 17:59:54.511815 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000
Jan 16 17:59:54.511913 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000
Jan 16 17:59:54.511993 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000
Jan 16 17:59:54.512085 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000
Jan 16 17:59:54.512166 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000
Jan 16 17:59:54.512263 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000
Jan 16 17:59:54.512355 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000
Jan 16 17:59:54.512437 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000
Jan 16 17:59:54.512523 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000
Jan 16 17:59:54.512666 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000
Jan 16 17:59:54.512790 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000
Jan 16 17:59:54.512874 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000
Jan 16 17:59:54.512961 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000
Jan 16 17:59:54.513046 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 07] add_size 100000 add_align 100000
Jan 16 17:59:54.513144 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x001fffff] to [bus 07] add_size 100000 add_align 100000
Jan 16 17:59:54.513233 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000
Jan 16 17:59:54.513364 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000
Jan 16 17:59:54.513456 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000
Jan 16 17:59:54.513541 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000
Jan 16 17:59:54.513702 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000
Jan 16 17:59:54.513860 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] add_size 200000 add_align 100000
Jan 16 17:59:54.513948 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]: assigned
Jan 16 17:59:54.514053 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned
Jan 16 17:59:54.514327 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]: assigned
Jan 16 17:59:54.514421 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned
Jan 16 17:59:54.515698 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]: assigned
Jan 16 17:59:54.515882 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned
Jan 16 17:59:54.515976 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]: assigned
Jan 16 17:59:54.516059 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned
Jan 16 17:59:54.516164 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]: assigned
Jan 16 17:59:54.516253 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned
Jan 16 17:59:54.516336 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned
Jan 16 17:59:54.516417 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned
Jan 16 17:59:54.516507 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned
Jan 16 17:59:54.516587 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned
Jan 16 17:59:54.516756 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned
Jan 16 17:59:54.516847 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned
Jan 16 17:59:54.516939 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff]: assigned
Jan 16 17:59:54.517040 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned
Jan 16 17:59:54.517138 kernel: pci 0000:00:01.0: BAR 4 [mem 0x8001200000-0x8001203fff 64bit pref]: assigned
Jan 16 17:59:54.517220 kernel: pci 0000:00:01.0: BAR 1 [mem 0x11200000-0x11200fff]: assigned
Jan 16 17:59:54.517307 kernel: pci 0000:00:02.0: BAR 0 [mem 0x11201000-0x11201fff]: assigned
Jan 16 17:59:54.517387 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]: assigned
Jan 16 17:59:54.517469 kernel: pci 0000:00:02.1: BAR 0 [mem 0x11202000-0x11202fff]: assigned
Jan 16 17:59:54.517552 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]: assigned
Jan 16 17:59:54.517647 kernel: pci 0000:00:02.2: BAR 0 [mem 0x11203000-0x11203fff]: assigned
Jan 16 17:59:54.517746 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]: assigned
Jan 16 17:59:54.517833 kernel: pci 0000:00:02.3: BAR 0 [mem 0x11204000-0x11204fff]: assigned
Jan 16 17:59:54.517914 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]: assigned
Jan 16 17:59:54.518019 kernel: pci 0000:00:02.4: BAR 0 [mem 0x11205000-0x11205fff]: assigned
Jan 16 17:59:54.518101 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]: assigned
Jan 16 17:59:54.518183 kernel: pci 0000:00:02.5: BAR 0 [mem 0x11206000-0x11206fff]: assigned
Jan 16 17:59:54.518263 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]: assigned
Jan 16 17:59:54.518346 kernel: pci 0000:00:02.6: BAR 0 [mem 0x11207000-0x11207fff]: assigned
Jan 16 17:59:54.518501 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]: assigned
Jan 16 17:59:54.518596 kernel: pci 0000:00:02.7: BAR 0 [mem 0x11208000-0x11208fff]: assigned
Jan 16 17:59:54.518695 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]: assigned
Jan 16 17:59:54.520895 kernel: pci 0000:00:03.0: BAR 0 [mem 0x11209000-0x11209fff]: assigned
Jan 16 17:59:54.521012 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]: assigned
Jan 16 17:59:54.521104 kernel: pci 0000:00:04.0: BAR 0 [io 0xa000-0xa007]: assigned
Jan 16 17:59:54.521194 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned
Jan 16 17:59:54.521276 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned
Jan 16 17:59:54.521423 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned
Jan 16 17:59:54.521515 kernel: pci 0000:00:02.0: PCI bridge to [bus 01]
Jan 16 17:59:54.521679 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x1fff]
Jan 16 17:59:54.521793 kernel: pci 0000:00:02.0: bridge window [mem 0x10000000-0x101fffff]
Jan 16 17:59:54.521936 kernel: pci 0000:00:02.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]
Jan 16 17:59:54.522051 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned
Jan 16 17:59:54.522151 kernel: pci 0000:00:02.1: PCI bridge to [bus 02]
Jan 16 17:59:54.522242 kernel: pci 0000:00:02.1: bridge window [io 0x2000-0x2fff]
Jan 16 17:59:54.522339 kernel: pci 0000:00:02.1: bridge window [mem 0x10200000-0x103fffff]
Jan 16 17:59:54.522487 kernel: pci 0000:00:02.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]
Jan 16 17:59:54.522613 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned
Jan 16 17:59:54.526883 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned
Jan 16 17:59:54.526979 kernel: pci 0000:00:02.2: PCI bridge to [bus 03]
Jan 16 17:59:54.527062 kernel: pci 0000:00:02.2: bridge window [io 0x3000-0x3fff]
Jan 16 17:59:54.527145 kernel: pci 0000:00:02.2: bridge window [mem 0x10400000-0x105fffff]
Jan 16 17:59:54.527225 kernel: pci 0000:00:02.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]
Jan 16 17:59:54.527317 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned
Jan 16 17:59:54.527406 kernel: pci 0000:00:02.3: PCI bridge to [bus 04]
Jan 16 17:59:54.527487 kernel: pci 0000:00:02.3: bridge window [io 0x4000-0x4fff]
Jan 16 17:59:54.527567 kernel: pci 0000:00:02.3: bridge window [mem 0x10600000-0x107fffff]
Jan 16 17:59:54.527675 kernel: pci 0000:00:02.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]
Jan 16 17:59:54.527800 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned
Jan 16 17:59:54.527887 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned
Jan 16 17:59:54.527977 kernel: pci 0000:00:02.4: PCI bridge to [bus 05]
Jan 16 17:59:54.528059 kernel: pci 0000:00:02.4: bridge window [io 0x5000-0x5fff]
Jan 16 17:59:54.528138 kernel: pci 0000:00:02.4: bridge window [mem 0x10800000-0x109fffff]
Jan 16 17:59:54.528217 kernel: pci 0000:00:02.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]
Jan 16 17:59:54.528305 kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned
Jan 16 17:59:54.528389 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned
Jan 16 17:59:54.528473 kernel: pci 0000:00:02.5: PCI bridge to [bus 06]
Jan 16 17:59:54.530971 kernel: pci 0000:00:02.5: bridge window [io 0x6000-0x6fff]
Jan 16 17:59:54.531097 kernel: pci 0000:00:02.5: bridge window [mem 0x10a00000-0x10bfffff]
Jan 16 17:59:54.531180 kernel: pci 0000:00:02.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref]
Jan 16 17:59:54.531293 kernel: pci 0000:07:00.0: ROM [mem 0x10c00000-0x10c7ffff pref]: assigned
Jan 16 17:59:54.531388 kernel: pci 0000:07:00.0: BAR 4 [mem 0x8000c00000-0x8000c03fff 64bit pref]: assigned
Jan 16 17:59:54.531476 kernel: pci 0000:07:00.0: BAR 1 [mem 0x10c80000-0x10c80fff]: assigned
Jan 16 17:59:54.531559 kernel: pci 0000:00:02.6: PCI bridge to [bus 07]
Jan 16 17:59:54.531760 kernel: pci 0000:00:02.6: bridge window [io 0x7000-0x7fff]
Jan 16 17:59:54.531887 kernel: pci 0000:00:02.6: bridge window [mem 0x10c00000-0x10dfffff]
Jan 16 17:59:54.531972 kernel: pci 0000:00:02.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]
Jan 16 17:59:54.532057 kernel: pci 0000:00:02.7: PCI bridge to [bus 08]
Jan 16 17:59:54.532140 kernel: pci 0000:00:02.7: bridge window [io 0x8000-0x8fff]
Jan 16 17:59:54.532282 kernel: pci 0000:00:02.7: bridge window [mem 0x10e00000-0x10ffffff]
Jan 16 17:59:54.532378 kernel: pci 0000:00:02.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]
Jan 16 17:59:54.532468 kernel: pci 0000:00:03.0: PCI bridge to [bus 09]
Jan 16 17:59:54.532550 kernel: pci 0000:00:03.0: bridge window [io 0x9000-0x9fff]
Jan 16
17:59:54.537345 kernel: pci 0000:00:03.0: bridge window [mem 0x11000000-0x111fffff] Jan 16 17:59:54.537494 kernel: pci 0000:00:03.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 16 17:59:54.537578 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 16 17:59:54.537683 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 16 17:59:54.537878 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 16 17:59:54.537977 kernel: pci_bus 0000:01: resource 0 [io 0x1000-0x1fff] Jan 16 17:59:54.538055 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 16 17:59:54.538138 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 16 17:59:54.538225 kernel: pci_bus 0000:02: resource 0 [io 0x2000-0x2fff] Jan 16 17:59:54.538330 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 16 17:59:54.538413 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 16 17:59:54.538499 kernel: pci_bus 0000:03: resource 0 [io 0x3000-0x3fff] Jan 16 17:59:54.538575 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 16 17:59:54.538677 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 16 17:59:54.538792 kernel: pci_bus 0000:04: resource 0 [io 0x4000-0x4fff] Jan 16 17:59:54.538870 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 16 17:59:54.538944 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 16 17:59:54.539027 kernel: pci_bus 0000:05: resource 0 [io 0x5000-0x5fff] Jan 16 17:59:54.539101 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 16 17:59:54.539181 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 16 17:59:54.539267 kernel: pci_bus 0000:06: resource 0 [io 0x6000-0x6fff] Jan 16 17:59:54.539351 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 16 17:59:54.539437 kernel: 
pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 16 17:59:54.539528 kernel: pci_bus 0000:07: resource 0 [io 0x7000-0x7fff] Jan 16 17:59:54.539612 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 16 17:59:54.542136 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 16 17:59:54.542234 kernel: pci_bus 0000:08: resource 0 [io 0x8000-0x8fff] Jan 16 17:59:54.542314 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 16 17:59:54.542393 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 16 17:59:54.542476 kernel: pci_bus 0000:09: resource 0 [io 0x9000-0x9fff] Jan 16 17:59:54.542571 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 16 17:59:54.542783 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 16 17:59:54.542801 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 16 17:59:54.542810 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 16 17:59:54.542819 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 16 17:59:54.542827 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 16 17:59:54.542835 kernel: iommu: Default domain type: Translated Jan 16 17:59:54.542844 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 16 17:59:54.542857 kernel: efivars: Registered efivars operations Jan 16 17:59:54.542865 kernel: vgaarb: loaded Jan 16 17:59:54.542873 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 16 17:59:54.542881 kernel: VFS: Disk quotas dquot_6.6.0 Jan 16 17:59:54.542890 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 16 17:59:54.542898 kernel: pnp: PnP ACPI init Jan 16 17:59:54.543007 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 16 17:59:54.543022 kernel: pnp: PnP ACPI: found 1 devices Jan 16 17:59:54.543033 kernel: NET: Registered PF_INET 
protocol family Jan 16 17:59:54.543041 kernel: IP idents hash table entries: 65536 (order: 7, 524288 bytes, linear) Jan 16 17:59:54.543050 kernel: tcp_listen_portaddr_hash hash table entries: 2048 (order: 3, 32768 bytes, linear) Jan 16 17:59:54.543059 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 16 17:59:54.543067 kernel: TCP established hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 16 17:59:54.543075 kernel: TCP bind hash table entries: 32768 (order: 8, 1048576 bytes, linear) Jan 16 17:59:54.543087 kernel: TCP: Hash tables configured (established 32768 bind 32768) Jan 16 17:59:54.543096 kernel: UDP hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 17:59:54.543105 kernel: UDP-Lite hash table entries: 2048 (order: 4, 65536 bytes, linear) Jan 16 17:59:54.543114 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 16 17:59:54.543238 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 16 17:59:54.543251 kernel: PCI: CLS 0 bytes, default 64 Jan 16 17:59:54.543260 kernel: kvm [1]: HYP mode not available Jan 16 17:59:54.543270 kernel: Initialise system trusted keyrings Jan 16 17:59:54.543279 kernel: workingset: timestamp_bits=39 max_order=20 bucket_order=0 Jan 16 17:59:54.543287 kernel: Key type asymmetric registered Jan 16 17:59:54.543297 kernel: Asymmetric key parser 'x509' registered Jan 16 17:59:54.543306 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 16 17:59:54.543314 kernel: io scheduler mq-deadline registered Jan 16 17:59:54.543323 kernel: io scheduler kyber registered Jan 16 17:59:54.543333 kernel: io scheduler bfq registered Jan 16 17:59:54.543342 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 16 17:59:54.543433 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 50 Jan 16 17:59:54.543528 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 50 Jan 16 17:59:54.546003 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ 
MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.546207 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 51 Jan 16 17:59:54.546354 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 51 Jan 16 17:59:54.546453 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.546541 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 52 Jan 16 17:59:54.546636 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 52 Jan 16 17:59:54.546745 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.546895 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 53 Jan 16 17:59:54.546985 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 53 Jan 16 17:59:54.547074 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.547161 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 54 Jan 16 17:59:54.547262 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 54 Jan 16 17:59:54.547351 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.547456 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 55 Jan 16 17:59:54.547546 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 55 Jan 16 17:59:54.547668 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.547827 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 56 Jan 16 17:59:54.547914 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 56 Jan 16 17:59:54.547996 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ 
PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.548081 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 57 Jan 16 17:59:54.548165 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 57 Jan 16 17:59:54.548251 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.548263 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 16 17:59:54.548347 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 58 Jan 16 17:59:54.548427 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 58 Jan 16 17:59:54.548506 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 16 17:59:54.548530 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 16 17:59:54.548540 kernel: ACPI: button: Power Button [PWRB] Jan 16 17:59:54.548551 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 16 17:59:54.548736 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 16 17:59:54.548859 kernel: virtio-pci 0000:07:00.0: enabling device (0000 -> 0002) Jan 16 17:59:54.548872 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 16 17:59:54.548881 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 16 17:59:54.548969 kernel: serial 0000:00:04.0: enabling device (0000 -> 0001) Jan 16 17:59:54.548981 kernel: 0000:00:04.0: ttyS0 at I/O 0xa000 (irq = 45, base_baud = 115200) is a 16550A Jan 16 17:59:54.548993 kernel: thunder_xcv, ver 1.0 Jan 16 17:59:54.549002 kernel: thunder_bgx, ver 1.0 Jan 16 17:59:54.549009 kernel: nicpf, ver 1.0 Jan 16 17:59:54.549018 kernel: nicvf, ver 1.0 Jan 16 17:59:54.549195 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 16 17:59:54.549285 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-16T17:59:53 UTC (1768586393) Jan 16 17:59:54.549299 kernel: hid: raw HID events 
driver (C) Jiri Kosina Jan 16 17:59:54.549308 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 16 17:59:54.549318 kernel: watchdog: NMI not fully supported Jan 16 17:59:54.549326 kernel: watchdog: Hard watchdog permanently disabled Jan 16 17:59:54.549334 kernel: NET: Registered PF_INET6 protocol family Jan 16 17:59:54.549342 kernel: Segment Routing with IPv6 Jan 16 17:59:54.549350 kernel: In-situ OAM (IOAM) with IPv6 Jan 16 17:59:54.549360 kernel: NET: Registered PF_PACKET protocol family Jan 16 17:59:54.549369 kernel: Key type dns_resolver registered Jan 16 17:59:54.549377 kernel: registered taskstats version 1 Jan 16 17:59:54.549385 kernel: Loading compiled-in X.509 certificates Jan 16 17:59:54.549393 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.65-flatcar: 27e3aa638f3535434dc9dbdde4239fca944d5458' Jan 16 17:59:54.549402 kernel: Demotion targets for Node 0: null Jan 16 17:59:54.549410 kernel: Key type .fscrypt registered Jan 16 17:59:54.549418 kernel: Key type fscrypt-provisioning registered Jan 16 17:59:54.549428 kernel: ima: No TPM chip found, activating TPM-bypass! 
Jan 16 17:59:54.549436 kernel: ima: Allocated hash algorithm: sha1 Jan 16 17:59:54.549444 kernel: ima: No architecture policies found Jan 16 17:59:54.549453 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 16 17:59:54.549461 kernel: clk: Disabling unused clocks Jan 16 17:59:54.549469 kernel: PM: genpd: Disabling unused power domains Jan 16 17:59:54.549477 kernel: Freeing unused kernel memory: 12480K Jan 16 17:59:54.549488 kernel: Run /init as init process Jan 16 17:59:54.549496 kernel: with arguments: Jan 16 17:59:54.549504 kernel: /init Jan 16 17:59:54.549512 kernel: with environment: Jan 16 17:59:54.549520 kernel: HOME=/ Jan 16 17:59:54.549527 kernel: TERM=linux Jan 16 17:59:54.549535 kernel: ACPI: bus type USB registered Jan 16 17:59:54.549545 kernel: usbcore: registered new interface driver usbfs Jan 16 17:59:54.549553 kernel: usbcore: registered new interface driver hub Jan 16 17:59:54.549562 kernel: usbcore: registered new device driver usb Jan 16 17:59:54.550770 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 17:59:54.550933 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 16 17:59:54.551043 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 16 17:59:54.551130 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 16 17:59:54.551258 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 16 17:59:54.551348 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 16 17:59:54.551491 kernel: hub 1-0:1.0: USB hub found Jan 16 17:59:54.551585 kernel: hub 1-0:1.0: 4 ports detected Jan 16 17:59:54.551742 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. 
Jan 16 17:59:54.551882 kernel: hub 2-0:1.0: USB hub found Jan 16 17:59:54.551976 kernel: hub 2-0:1.0: 4 ports detected Jan 16 17:59:54.551986 kernel: SCSI subsystem initialized Jan 16 17:59:54.552082 kernel: virtio_scsi virtio5: 2/0/0 default/read/poll queues Jan 16 17:59:54.552188 kernel: scsi host0: Virtio SCSI HBA Jan 16 17:59:54.552297 kernel: scsi 0:0:0:0: CD-ROM QEMU QEMU CD-ROM 2.5+ PQ: 0 ANSI: 5 Jan 16 17:59:54.552396 kernel: scsi 0:0:0:1: Direct-Access QEMU QEMU HARDDISK 2.5+ PQ: 0 ANSI: 5 Jan 16 17:59:54.552486 kernel: sd 0:0:0:1: Power-on or device reset occurred Jan 16 17:59:54.552577 kernel: sd 0:0:0:1: [sda] 80003072 512-byte logical blocks: (41.0 GB/38.1 GiB) Jan 16 17:59:54.553816 kernel: sd 0:0:0:1: [sda] Write Protect is off Jan 16 17:59:54.553929 kernel: sd 0:0:0:1: [sda] Mode Sense: 63 00 00 08 Jan 16 17:59:54.554026 kernel: sd 0:0:0:1: [sda] Write cache: enabled, read cache: enabled, doesn't support DPO or FUA Jan 16 17:59:54.554038 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 16 17:59:54.554047 kernel: GPT:25804799 != 80003071 Jan 16 17:59:54.554055 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 16 17:59:54.554063 kernel: GPT:25804799 != 80003071 Jan 16 17:59:54.554071 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 16 17:59:54.554081 kernel: sda: sda1 sda2 sda3 sda4 sda6 sda7 sda9 Jan 16 17:59:54.554168 kernel: sd 0:0:0:1: [sda] Attached SCSI disk Jan 16 17:59:54.554260 kernel: sr 0:0:0:0: Power-on or device reset occurred Jan 16 17:59:54.554347 kernel: sr 0:0:0:0: [sr0] scsi3-mmc drive: 16x/50x cd/rw xa/form2 cdda tray Jan 16 17:59:54.554358 kernel: cdrom: Uniform CD-ROM driver Revision: 3.20 Jan 16 17:59:54.554443 kernel: sr 0:0:0:0: Attached scsi CD-ROM sr0 Jan 16 17:59:54.554454 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 16 17:59:54.554464 kernel: device-mapper: uevent: version 1.0.3 Jan 16 17:59:54.554472 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 16 17:59:54.554481 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 16 17:59:54.554489 kernel: raid6: neonx8 gen() 15689 MB/s Jan 16 17:59:54.554497 kernel: raid6: neonx4 gen() 12045 MB/s Jan 16 17:59:54.554505 kernel: raid6: neonx2 gen() 12892 MB/s Jan 16 17:59:54.554513 kernel: raid6: neonx1 gen() 10258 MB/s Jan 16 17:59:54.554523 kernel: raid6: int64x8 gen() 6654 MB/s Jan 16 17:59:54.554532 kernel: raid6: int64x4 gen() 7185 MB/s Jan 16 17:59:54.554540 kernel: raid6: int64x2 gen() 4916 MB/s Jan 16 17:59:54.554693 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 16 17:59:54.554721 kernel: raid6: int64x1 gen() 5021 MB/s Jan 16 17:59:54.554730 kernel: raid6: using algorithm neonx8 gen() 15689 MB/s Jan 16 17:59:54.554738 kernel: raid6: .... xor() 12013 MB/s, rmw enabled Jan 16 17:59:54.554750 kernel: raid6: using neon recovery algorithm Jan 16 17:59:54.554758 kernel: xor: measuring software checksum speed Jan 16 17:59:54.554766 kernel: 8regs : 21613 MB/sec Jan 16 17:59:54.554774 kernel: 32regs : 21704 MB/sec Jan 16 17:59:54.554783 kernel: arm64_neon : 25544 MB/sec Jan 16 17:59:54.554791 kernel: xor: using function: arm64_neon (25544 MB/sec) Jan 16 17:59:54.554800 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 16 17:59:54.554810 kernel: BTRFS: device fsid 772c9e2d-7e98-4acf-842c-b5416fff0f38 devid 1 transid 34 /dev/mapper/usr (254:0) scanned by mount (213) Jan 16 17:59:54.554818 kernel: BTRFS info (device dm-0): first mount of filesystem 772c9e2d-7e98-4acf-842c-b5416fff0f38 Jan 16 17:59:54.554827 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 16 17:59:54.554835 kernel: BTRFS info (device dm-0): enabling ssd optimizations Jan 16 17:59:54.554844 kernel: BTRFS info (device dm-0): disabling log replay at 
mount time Jan 16 17:59:54.554852 kernel: BTRFS info (device dm-0): enabling free space tree Jan 16 17:59:54.554861 kernel: loop: module loaded Jan 16 17:59:54.554870 kernel: loop0: detected capacity change from 0 to 91832 Jan 16 17:59:54.554879 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 16 17:59:54.554998 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 16 17:59:54.555013 systemd[1]: Successfully made /usr/ read-only. Jan 16 17:59:54.555024 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 17:59:54.555036 systemd[1]: Detected virtualization kvm. Jan 16 17:59:54.555044 systemd[1]: Detected architecture arm64. Jan 16 17:59:54.555053 systemd[1]: Running in initrd. Jan 16 17:59:54.555062 systemd[1]: No hostname configured, using default hostname. Jan 16 17:59:54.555071 systemd[1]: Hostname set to . Jan 16 17:59:54.555080 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 17:59:54.555088 systemd[1]: Queued start job for default target initrd.target. Jan 16 17:59:54.555098 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 17:59:54.555106 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 17:59:54.555115 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 17:59:54.555125 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 16 17:59:54.555134 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... 
Jan 16 17:59:54.555143 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 16 17:59:54.555154 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 16 17:59:54.555163 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 17:59:54.555172 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 16 17:59:54.555181 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 16 17:59:54.555189 systemd[1]: Reached target paths.target - Path Units. Jan 16 17:59:54.555199 systemd[1]: Reached target slices.target - Slice Units. Jan 16 17:59:54.555210 systemd[1]: Reached target swap.target - Swaps. Jan 16 17:59:54.555220 systemd[1]: Reached target timers.target - Timer Units. Jan 16 17:59:54.555228 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 17:59:54.555238 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 17:59:54.555248 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 17:59:54.555259 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 16 17:59:54.555269 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 16 17:59:54.555280 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 17:59:54.555290 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 17:59:54.555300 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 17:59:54.555311 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 17:59:54.555320 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 16 17:59:54.555331 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... 
Jan 16 17:59:54.555340 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 17:59:54.555351 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 16 17:59:54.555361 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 16 17:59:54.555370 systemd[1]: Starting systemd-fsck-usr.service... Jan 16 17:59:54.555381 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 17:59:54.555390 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 17:59:54.555402 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 17:59:54.555413 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 16 17:59:54.555422 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 17:59:54.555431 systemd[1]: Finished systemd-fsck-usr.service. Jan 16 17:59:54.555511 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 17:59:54.555569 systemd-journald[350]: Collecting audit messages is enabled. Jan 16 17:59:54.555592 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 16 17:59:54.555601 kernel: Bridge firewalling registered Jan 16 17:59:54.555613 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 17:59:54.556697 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 17:59:54.556732 kernel: audit: type=1130 audit(1768586394.533:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 17:59:54.556742 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 17:59:54.556751 kernel: audit: type=1130 audit(1768586394.537:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.556760 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 16 17:59:54.556775 kernel: audit: type=1130 audit(1768586394.541:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.556783 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 17:59:54.556795 systemd-journald[350]: Journal started Jan 16 17:59:54.556820 systemd-journald[350]: Runtime Journal (/run/log/journal/8df9800b1daa40b2a4f5723db35efbf5) is 8M, max 76.5M, 68.5M free. Jan 16 17:59:54.533000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.537000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.541000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.532253 systemd-modules-load[351]: Inserted module 'br_netfilter' Jan 16 17:59:54.561211 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 16 17:59:54.561264 systemd[1]: Started systemd-journald.service - Journal Service. Jan 16 17:59:54.564216 kernel: audit: type=1130 audit(1768586394.561:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.571058 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 16 17:59:54.582745 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 17:59:54.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.590743 kernel: audit: type=1130 audit(1768586394.582:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.591477 systemd-tmpfiles[373]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 16 17:59:54.598775 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 17:59:54.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.601000 audit: BPF prog-id=6 op=LOAD Jan 16 17:59:54.602559 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... 
Jan 16 17:59:54.604378 kernel: audit: type=1130 audit(1768586394.598:7): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.604404 kernel: audit: type=1334 audit(1768586394.601:8): prog-id=6 op=LOAD Jan 16 17:59:54.606686 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 17:59:54.607851 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 17:59:54.606000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.614162 kernel: audit: type=1130 audit(1768586394.606:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.614204 kernel: audit: type=1130 audit(1768586394.607:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.616195 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... 
Jan 16 17:59:54.640478 dracut-cmdline[392]: dracut-109 Jan 16 17:59:54.648558 dracut-cmdline[392]: Using kernel command line parameters: SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=hetzner verity.usrhash=924fb3eb04ba1d8edcb66284d30e3342855b0579b62556e7722bcf37e82bda13 Jan 16 17:59:54.668157 systemd-resolved[385]: Positive Trust Anchors: Jan 16 17:59:54.668172 systemd-resolved[385]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 17:59:54.668175 systemd-resolved[385]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 17:59:54.668207 systemd-resolved[385]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 17:59:54.706884 systemd-resolved[385]: Defaulting to hostname 'linux'. Jan 16 17:59:54.710360 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 17:59:54.711000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.712091 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 17:59:54.785690 kernel: Loading iSCSI transport class v2.0-870. 
Jan 16 17:59:54.796670 kernel: iscsi: registered transport (tcp) Jan 16 17:59:54.811756 kernel: iscsi: registered transport (qla4xxx) Jan 16 17:59:54.811845 kernel: QLogic iSCSI HBA Driver Jan 16 17:59:54.843538 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 17:59:54.874005 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 17:59:54.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.877417 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 17:59:54.939386 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 16 17:59:54.940000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.942064 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 16 17:59:54.943449 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 16 17:59:54.987207 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 16 17:59:54.988000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:54.988000 audit: BPF prog-id=7 op=LOAD Jan 16 17:59:54.988000 audit: BPF prog-id=8 op=LOAD Jan 16 17:59:54.990413 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 17:59:55.030302 systemd-udevd[622]: Using default interface naming scheme 'v257'. 
Jan 16 17:59:55.041318 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 17:59:55.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.046576 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 16 17:59:55.077585 dracut-pre-trigger[680]: rd.md=0: removing MD RAID activation Jan 16 17:59:55.109000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.109766 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 17:59:55.112000 audit: BPF prog-id=9 op=LOAD Jan 16 17:59:55.115605 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 17:59:55.117754 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 17:59:55.119000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.121789 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 17:59:55.166108 systemd-networkd[754]: lo: Link UP Jan 16 17:59:55.166116 systemd-networkd[754]: lo: Gained carrier Jan 16 17:59:55.166856 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 17:59:55.169000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.169794 systemd[1]: Reached target network.target - Network. 
Jan 16 17:59:55.203529 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 17:59:55.208000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.211800 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 16 17:59:55.362684 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 16 17:59:55.367787 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 16 17:59:55.375039 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:02.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 16 17:59:55.394303 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - QEMU_HARDDISK EFI-SYSTEM. Jan 16 17:59:55.412332 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - QEMU_HARDDISK ROOT. Jan 16 17:59:55.423681 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - QEMU_HARDDISK USR-A. Jan 16 17:59:55.435373 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 16 17:59:55.435755 kernel: usbcore: registered new interface driver usbhid Jan 16 17:59:55.435770 kernel: usbhid: USB HID core driver Jan 16 17:59:55.442070 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 17:59:55.443860 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 16 17:59:55.447995 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 17:59:55.448127 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. 
Jan 16 17:59:55.449810 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 17:59:55.449000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.453934 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 17:59:55.476083 disk-uuid[807]: Primary Header is updated. Jan 16 17:59:55.476083 disk-uuid[807]: Secondary Entries is updated. Jan 16 17:59:55.476083 disk-uuid[807]: Secondary Header is updated. Jan 16 17:59:55.495141 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 17:59:55.496000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.499675 systemd-networkd[754]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 17:59:55.499684 systemd-networkd[754]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 17:59:55.502921 systemd-networkd[754]: eth1: Link UP Jan 16 17:59:55.503095 systemd-networkd[754]: eth1: Gained carrier Jan 16 17:59:55.503110 systemd-networkd[754]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 17:59:55.511772 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 17:59:55.511782 systemd-networkd[754]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. 
Jan 16 17:59:55.512191 systemd-networkd[754]: eth0: Link UP Jan 16 17:59:55.514855 systemd-networkd[754]: eth0: Gained carrier Jan 16 17:59:55.514873 systemd-networkd[754]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 17:59:55.546899 systemd-networkd[754]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 17:59:55.567739 systemd-networkd[754]: eth0: DHCPv4 address 167.235.246.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 17:59:55.629913 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 16 17:59:55.630000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:55.631018 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 17:59:55.632404 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 17:59:55.633842 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 17:59:55.636722 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 16 17:59:55.661822 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 16 17:59:55.663000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.541650 disk-uuid[809]: Warning: The kernel is still using the old partition table. Jan 16 17:59:56.541650 disk-uuid[809]: The new table will be used at the next reboot or after you Jan 16 17:59:56.541650 disk-uuid[809]: run partprobe(8) or kpartx(8) Jan 16 17:59:56.541650 disk-uuid[809]: The operation has completed successfully. Jan 16 17:59:56.551940 systemd[1]: disk-uuid.service: Deactivated successfully. 
Jan 16 17:59:56.552062 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 16 17:59:56.552000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.552000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.554605 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 16 17:59:56.592665 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (836) Jan 16 17:59:56.592757 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 17:59:56.593930 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 17:59:56.597768 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 17:59:56.597832 kernel: BTRFS info (device sda6): turning on async discard Jan 16 17:59:56.597851 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 17:59:56.605652 kernel: BTRFS info (device sda6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 17:59:56.607152 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 16 17:59:56.607000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.609342 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... 
Jan 16 17:59:56.691965 systemd-networkd[754]: eth1: Gained IPv6LL Jan 16 17:59:56.762092 ignition[855]: Ignition 2.24.0 Jan 16 17:59:56.762111 ignition[855]: Stage: fetch-offline Jan 16 17:59:56.764310 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 17:59:56.764000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.762155 ignition[855]: no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:56.762164 ignition[855]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:56.762331 ignition[855]: parsed url from cmdline: "" Jan 16 17:59:56.762334 ignition[855]: no config URL provided Jan 16 17:59:56.762339 ignition[855]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 17:59:56.762348 ignition[855]: no config at "/usr/lib/ignition/user.ign" Jan 16 17:59:56.768124 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... 
Jan 16 17:59:56.762353 ignition[855]: failed to fetch config: resource requires networking Jan 16 17:59:56.762515 ignition[855]: Ignition finished successfully Jan 16 17:59:56.809539 ignition[862]: Ignition 2.24.0 Jan 16 17:59:56.809554 ignition[862]: Stage: fetch Jan 16 17:59:56.810452 ignition[862]: no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:56.810462 ignition[862]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:56.810553 ignition[862]: parsed url from cmdline: "" Jan 16 17:59:56.810556 ignition[862]: no config URL provided Jan 16 17:59:56.810561 ignition[862]: reading system config file "/usr/lib/ignition/user.ign" Jan 16 17:59:56.810566 ignition[862]: no config at "/usr/lib/ignition/user.ign" Jan 16 17:59:56.811074 ignition[862]: GET http://169.254.169.254/hetzner/v1/userdata: attempt #1 Jan 16 17:59:56.819140 ignition[862]: GET result: OK Jan 16 17:59:56.819285 ignition[862]: parsing config with SHA512: d15668ecf7d6b90615c6cc4cb4051ef2326bfbdbcc08aa5cea974fd90e8bb1251e0600a45006729740bf91ca3870a28c202c3ac687a90e8ec225c60f4d31870c Jan 16 17:59:56.827106 unknown[862]: fetched base config from "system" Jan 16 17:59:56.827123 unknown[862]: fetched base config from "system" Jan 16 17:59:56.827130 unknown[862]: fetched user config from "hetzner" Jan 16 17:59:56.829192 ignition[862]: fetch: fetch complete Jan 16 17:59:56.829215 ignition[862]: fetch: fetch passed Jan 16 17:59:56.831775 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 16 17:59:56.832000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.829302 ignition[862]: Ignition finished successfully Jan 16 17:59:56.835593 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... 
Jan 16 17:59:56.869392 ignition[869]: Ignition 2.24.0 Jan 16 17:59:56.870092 ignition[869]: Stage: kargs Jan 16 17:59:56.870583 ignition[869]: no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:56.870593 ignition[869]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:56.871514 ignition[869]: kargs: kargs passed Jan 16 17:59:56.871564 ignition[869]: Ignition finished successfully Jan 16 17:59:56.874666 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 16 17:59:56.875000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.877264 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 16 17:59:56.905887 ignition[875]: Ignition 2.24.0 Jan 16 17:59:56.905908 ignition[875]: Stage: disks Jan 16 17:59:56.906123 ignition[875]: no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:56.906132 ignition[875]: no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:56.907013 ignition[875]: disks: disks passed Jan 16 17:59:56.907066 ignition[875]: Ignition finished successfully Jan 16 17:59:56.911069 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 16 17:59:56.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.912834 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 16 17:59:56.913518 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 16 17:59:56.914907 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 17:59:56.916038 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 17:59:56.917084 systemd[1]: Reached target basic.target - Basic System. 
Jan 16 17:59:56.919242 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 16 17:59:56.964951 systemd-fsck[883]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 16 17:59:56.970729 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 16 17:59:56.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:56.976068 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 16 17:59:57.013817 systemd-networkd[754]: eth0: Gained IPv6LL Jan 16 17:59:57.066834 kernel: EXT4-fs (sda9): mounted filesystem 3360ad79-d1e3-4f32-ae7d-4a8c0a3c719d r/w with ordered data mode. Quota mode: none. Jan 16 17:59:57.068354 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 16 17:59:57.070702 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 16 17:59:57.074861 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 17:59:57.076546 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 16 17:59:57.080402 systemd[1]: Starting flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent... Jan 16 17:59:57.081162 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 16 17:59:57.081195 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 17:59:57.101825 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 16 17:59:57.104971 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... 
Jan 16 17:59:57.134853 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (891) Jan 16 17:59:57.134931 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 17:59:57.134948 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 17:59:57.141632 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 17:59:57.141736 kernel: BTRFS info (device sda6): turning on async discard Jan 16 17:59:57.141749 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 17:59:57.144512 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 17:59:57.171236 coreos-metadata[893]: Jan 16 17:59:57.171 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/hostname: Attempt #1 Jan 16 17:59:57.172956 coreos-metadata[893]: Jan 16 17:59:57.172 INFO Fetch successful Jan 16 17:59:57.174139 coreos-metadata[893]: Jan 16 17:59:57.174 INFO wrote hostname ci-4580-0-0-p-e4bb445d88 to /sysroot/etc/hostname Jan 16 17:59:57.176000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:57.176724 systemd[1]: Finished flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 17:59:57.306483 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 16 17:59:57.311079 kernel: kauditd_printk_skb: 25 callbacks suppressed Jan 16 17:59:57.311111 kernel: audit: type=1130 audit(1768586397.307:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:57.307000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 17:59:57.311150 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 16 17:59:57.314265 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 16 17:59:57.349654 kernel: BTRFS info (device sda6): last unmount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 17:59:57.371783 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 16 17:59:57.372000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:57.375815 kernel: audit: type=1130 audit(1768586397.372:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:57.385893 ignition[992]: INFO : Ignition 2.24.0 Jan 16 17:59:57.385893 ignition[992]: INFO : Stage: mount Jan 16 17:59:57.389343 ignition[992]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:57.389343 ignition[992]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:57.389343 ignition[992]: INFO : mount: mount passed Jan 16 17:59:57.389343 ignition[992]: INFO : Ignition finished successfully Jan 16 17:59:57.391000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:57.390771 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 16 17:59:57.393649 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 16 17:59:57.396652 kernel: audit: type=1130 audit(1768586397.391:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success'
Jan 16 17:59:57.579511 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 16 17:59:57.583156 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 16 17:59:57.615314 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/sda6 (8:6) scanned by mount (1003) Jan 16 17:59:57.615381 kernel: BTRFS info (device sda6): first mount of filesystem 5e96ab2e-f088-4ca2-ba97-55451a1893dc Jan 16 17:59:57.615412 kernel: BTRFS info (device sda6): using crc32c (crc32c-generic) checksum algorithm Jan 16 17:59:57.619963 kernel: BTRFS info (device sda6): enabling ssd optimizations Jan 16 17:59:57.620041 kernel: BTRFS info (device sda6): turning on async discard Jan 16 17:59:57.620057 kernel: BTRFS info (device sda6): enabling free space tree Jan 16 17:59:57.622710 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. Jan 16 17:59:57.660297 ignition[1020]: INFO : Ignition 2.24.0 Jan 16 17:59:57.660297 ignition[1020]: INFO : Stage: files Jan 16 17:59:57.661493 ignition[1020]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:57.661493 ignition[1020]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:57.661493 ignition[1020]: DEBUG : files: compiled without relabeling support, skipping Jan 16 17:59:57.664143 ignition[1020]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 16 17:59:57.664143 ignition[1020]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 16 17:59:57.670197 ignition[1020]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 16 17:59:57.672285 ignition[1020]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 16 17:59:57.673610 unknown[1020]: wrote ssh authorized keys file for user: core Jan 16 17:59:57.676523 ignition[1020]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 16 17:59:57.676523 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz"
Jan 16 17:59:57.676523 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.0-linux-arm64.tar.gz: attempt #1 Jan 16 17:59:57.735905 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 16 17:59:57.839112 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.0-linux-arm64.tar.gz" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 17:59:57.841027 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf"
Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 17:59:57.853664 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.32.4-arm64.raw: attempt #1 Jan 16 17:59:58.180870 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 16 17:59:58.741679 ignition[1020]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.32.4-arm64.raw" Jan 16 17:59:58.741679 ignition[1020]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 16 17:59:58.745359 ignition[1020]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 17:59:58.749046 ignition[1020]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 16 17:59:58.749046 ignition[1020]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 16 17:59:58.749046 ignition[1020]: INFO : files: op(d): [started] processing unit "coreos-metadata.service" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: op(d): op(e): [started] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf"
Jan 16 17:59:58.753072 ignition[1020]: INFO : files: op(d): op(e): [finished] writing systemd drop-in "00-custom-metadata.conf" at "/sysroot/etc/systemd/system/coreos-metadata.service.d/00-custom-metadata.conf" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: op(d): [finished] processing unit "coreos-metadata.service" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: op(f): [started] setting preset to enabled for "prepare-helm.service" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: op(f): [finished] setting preset to enabled for "prepare-helm.service" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: createResultFile: createFiles: op(10): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 16 17:59:58.753072 ignition[1020]: INFO : files: files passed Jan 16 17:59:58.753072 ignition[1020]: INFO : Ignition finished successfully Jan 16 17:59:58.763716 kernel: audit: type=1130 audit(1768586398.754:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.754000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.753489 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 16 17:59:58.756172 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 16 17:59:58.760888 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 16 17:59:58.778983 systemd[1]: ignition-quench.service: Deactivated successfully.
Jan 16 17:59:58.779152 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 16 17:59:58.780000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.780000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.785336 kernel: audit: type=1130 audit(1768586398.780:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.785395 kernel: audit: type=1131 audit(1768586398.780:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.794587 initrd-setup-root-after-ignition[1052]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 17:59:58.794587 initrd-setup-root-after-ignition[1052]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 16 17:59:58.797616 initrd-setup-root-after-ignition[1056]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 16 17:59:58.801914 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 17:59:58.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.806531 systemd[1]: Reached target ignition-complete.target - Ignition Complete. 
Jan 16 17:59:58.808687 kernel: audit: type=1130 audit(1768586398.803:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.808891 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 16 17:59:58.889097 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 16 17:59:58.889288 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 16 17:59:58.891226 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 16 17:59:58.897258 kernel: audit: type=1130 audit(1768586398.890:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.897289 kernel: audit: type=1131 audit(1768586398.890:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.890000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.890000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.896173 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 16 17:59:58.898274 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 16 17:59:58.899330 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... 
Jan 16 17:59:58.933104 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 17:59:58.931000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.938178 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 16 17:59:58.939073 kernel: audit: type=1130 audit(1768586398.931:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.967604 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 16 17:59:58.967888 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 16 17:59:58.969111 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 17:59:58.970359 systemd[1]: Stopped target timers.target - Timer Units. Jan 16 17:59:58.971564 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 16 17:59:58.971750 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 16 17:59:58.972000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.973253 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 16 17:59:58.973947 systemd[1]: Stopped target basic.target - Basic System. Jan 16 17:59:58.975217 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 16 17:59:58.976421 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 16 17:59:58.977520 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. 
Jan 16 17:59:58.978724 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 16 17:59:58.980014 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 16 17:59:58.981132 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 16 17:59:58.982324 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 16 17:59:58.983321 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 16 17:59:58.984468 systemd[1]: Stopped target swap.target - Swaps. Jan 16 17:59:58.985000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.985405 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 16 17:59:58.985541 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 16 17:59:58.986891 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 16 17:59:58.987525 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 17:59:58.991000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.988685 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 16 17:59:58.988766 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 17:59:58.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.990012 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 16 17:59:58.993000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.990142 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 16 17:59:58.994000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-metadata-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:58.991833 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 16 17:59:58.991967 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 16 17:59:58.993298 systemd[1]: ignition-files.service: Deactivated successfully. Jan 16 17:59:58.993397 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 16 17:59:58.994515 systemd[1]: flatcar-metadata-hostname.service: Deactivated successfully. Jan 16 17:59:58.994612 systemd[1]: Stopped flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent. Jan 16 17:59:58.996597 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 16 17:59:59.000996 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 16 17:59:59.004750 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 16 17:59:59.004961 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 17:59:59.007000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.008004 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. 
Jan 16 17:59:59.009000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.008126 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 17:59:59.011000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.009808 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 16 17:59:59.009932 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 16 17:59:59.018000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.018000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.017322 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 16 17:59:59.017420 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 16 17:59:59.033743 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 16 17:59:59.036763 ignition[1076]: INFO : Ignition 2.24.0 Jan 16 17:59:59.036763 ignition[1076]: INFO : Stage: umount Jan 16 17:59:59.036763 ignition[1076]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 16 17:59:59.036763 ignition[1076]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/hetzner" Jan 16 17:59:59.042038 ignition[1076]: INFO : umount: umount passed Jan 16 17:59:59.042603 ignition[1076]: INFO : Ignition finished successfully Jan 16 17:59:59.046067 systemd[1]: ignition-mount.service: Deactivated successfully. 
Jan 16 17:59:59.046235 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 16 17:59:59.047000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.048369 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 16 17:59:59.048000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.048479 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 16 17:59:59.049948 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 16 17:59:59.050066 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 16 17:59:59.052000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.052258 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 16 17:59:59.053000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.052329 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 16 17:59:59.054000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.053167 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 16 17:59:59.053208 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). 
Jan 16 17:59:59.056000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.054173 systemd[1]: Stopped target network.target - Network. Jan 16 17:59:59.055043 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 16 17:59:59.055100 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 16 17:59:59.056234 systemd[1]: Stopped target paths.target - Path Units. Jan 16 17:59:59.057210 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 16 17:59:59.060716 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 17:59:59.063121 systemd[1]: Stopped target slices.target - Slice Units. Jan 16 17:59:59.065006 systemd[1]: Stopped target sockets.target - Socket Units. Jan 16 17:59:59.066041 systemd[1]: iscsid.socket: Deactivated successfully. Jan 16 17:59:59.066110 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 16 17:59:59.067444 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 16 17:59:59.067481 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 16 17:59:59.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.068726 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 16 17:59:59.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.068757 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. 
Jan 16 17:59:59.073000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.069989 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 16 17:59:59.070050 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 16 17:59:59.071245 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 16 17:59:59.071288 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 16 17:59:59.072598 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 16 17:59:59.072728 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 16 17:59:59.074215 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 16 17:59:59.075479 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 16 17:59:59.085169 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 16 17:59:59.086000 audit: BPF prog-id=6 op=UNLOAD Jan 16 17:59:59.085353 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 16 17:59:59.087000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.090411 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 16 17:59:59.090539 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 16 17:59:59.091000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.094000 audit: BPF prog-id=9 op=UNLOAD Jan 16 17:59:59.094743 systemd[1]: Stopped target network-pre.target - Preparation for Network. 
Jan 16 17:59:59.095414 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 16 17:59:59.095467 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 16 17:59:59.097441 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 16 17:59:59.099758 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 16 17:59:59.100000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.099853 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 16 17:59:59.101761 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 16 17:59:59.103000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.104000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.101823 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 16 17:59:59.103912 systemd[1]: systemd-modules-load.service: Deactivated successfully. Jan 16 17:59:59.103984 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 16 17:59:59.105044 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 17:59:59.132572 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 16 17:59:59.132842 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. 
Jan 16 17:59:59.133000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.134422 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 16 17:59:59.134464 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 16 17:59:59.136326 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 16 17:59:59.138000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.136357 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 17:59:59.140000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.137285 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 16 17:59:59.137335 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 16 17:59:59.143000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.139973 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 16 17:59:59.140040 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 16 17:59:59.141229 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 16 17:59:59.141287 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 16 17:59:59.146989 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 16 17:59:59.149701 systemd[1]: systemd-network-generator.service: Deactivated successfully. 
Jan 16 17:59:59.149806 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 17:59:59.152000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.153000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.152921 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 16 17:59:59.152996 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 17:59:59.156000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.153758 systemd[1]: systemd-tmpfiles-setup-dev-early.service: Deactivated successfully. Jan 16 17:59:59.158000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.153805 systemd[1]: Stopped systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 17:59:59.154572 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 16 17:59:59.154615 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 17:59:59.156955 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. 
Jan 16 17:59:59.157017 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 17:59:59.158988 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 16 17:59:59.161790 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 16 17:59:59.162000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.169526 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 16 17:59:59.169716 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 16 17:59:59.170000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.170000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 17:59:59.170940 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 16 17:59:59.173039 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 16 17:59:59.193802 systemd[1]: Switching root. Jan 16 17:59:59.242090 systemd-journald[350]: Journal stopped Jan 16 18:00:00.381441 systemd-journald[350]: Received SIGTERM from PID 1 (systemd). 
Jan 16 18:00:00.381516 kernel: SELinux: policy capability network_peer_controls=1 Jan 16 18:00:00.381535 kernel: SELinux: policy capability open_perms=1 Jan 16 18:00:00.381547 kernel: SELinux: policy capability extended_socket_class=1 Jan 16 18:00:00.381557 kernel: SELinux: policy capability always_check_network=0 Jan 16 18:00:00.381566 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 16 18:00:00.381577 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 16 18:00:00.381591 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 16 18:00:00.381601 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 16 18:00:00.381615 kernel: SELinux: policy capability userspace_initial_context=0 Jan 16 18:00:00.382723 systemd[1]: Successfully loaded SELinux policy in 68.138ms. Jan 16 18:00:00.382770 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.874ms. Jan 16 18:00:00.382789 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 16 18:00:00.382803 systemd[1]: Detected virtualization kvm. Jan 16 18:00:00.382818 systemd[1]: Detected architecture arm64. Jan 16 18:00:00.382838 systemd[1]: Detected first boot. Jan 16 18:00:00.382851 systemd[1]: Hostname set to . Jan 16 18:00:00.382866 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 16 18:00:00.382879 zram_generator::config[1119]: No configuration found. Jan 16 18:00:00.382898 kernel: NET: Registered PF_VSOCK protocol family Jan 16 18:00:00.382911 systemd[1]: Populated /etc with preset unit settings. Jan 16 18:00:00.382923 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 16 18:00:00.382938 systemd[1]: Stopped initrd-switch-root.service - Switch Root. 
Jan 16 18:00:00.382950 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 16 18:00:00.382965 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 16 18:00:00.382977 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 16 18:00:00.382993 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 16 18:00:00.383006 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 16 18:00:00.383019 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 16 18:00:00.383035 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 16 18:00:00.383048 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 16 18:00:00.383060 systemd[1]: Created slice user.slice - User and Session Slice. Jan 16 18:00:00.383074 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 16 18:00:00.383087 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 16 18:00:00.383100 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 16 18:00:00.383113 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 16 18:00:00.383128 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 16 18:00:00.383141 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 16 18:00:00.383153 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 16 18:00:00.383166 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 16 18:00:00.383179 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. 
Jan 16 18:00:00.383194 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 16 18:00:00.383207 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 16 18:00:00.383219 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 16 18:00:00.383232 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. Jan 16 18:00:00.383244 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 16 18:00:00.383256 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 16 18:00:00.383269 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 16 18:00:00.383284 systemd[1]: Reached target slices.target - Slice Units. Jan 16 18:00:00.383297 systemd[1]: Reached target swap.target - Swaps. Jan 16 18:00:00.383309 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 16 18:00:00.383322 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 16 18:00:00.383335 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 16 18:00:00.383347 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 16 18:00:00.383362 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 16 18:00:00.383378 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 16 18:00:00.383390 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 16 18:00:00.383403 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 16 18:00:00.383417 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 16 18:00:00.383430 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 16 18:00:00.383444 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. 
Jan 16 18:00:00.383456 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 16 18:00:00.383471 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 16 18:00:00.383484 systemd[1]: Mounting media.mount - External Media Directory... Jan 16 18:00:00.383497 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 16 18:00:00.383511 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 16 18:00:00.383524 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 16 18:00:00.383537 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 16 18:00:00.383550 systemd[1]: Reached target machines.target - Containers. Jan 16 18:00:00.383564 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 16 18:00:00.383577 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:00:00.383590 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 16 18:00:00.383603 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 16 18:00:00.383616 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:00:00.384746 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:00:00.384775 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 16 18:00:00.384797 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 16 18:00:00.384810 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:00:00.384824 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). 
Jan 16 18:00:00.384839 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 16 18:00:00.384853 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 16 18:00:00.384866 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 16 18:00:00.384878 systemd[1]: Stopped systemd-fsck-usr.service. Jan 16 18:00:00.384893 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:00:00.384905 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 16 18:00:00.384920 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 16 18:00:00.384933 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 16 18:00:00.384947 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 16 18:00:00.384960 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 16 18:00:00.384972 kernel: fuse: init (API version 7.41) Jan 16 18:00:00.384985 kernel: ACPI: bus type drm_connector registered Jan 16 18:00:00.384998 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 16 18:00:00.385013 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 16 18:00:00.385026 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 16 18:00:00.385038 systemd[1]: Mounted media.mount - External Media Directory. Jan 16 18:00:00.385052 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 16 18:00:00.385066 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 16 18:00:00.385083 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. 
Jan 16 18:00:00.385096 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 16 18:00:00.385108 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 16 18:00:00.385121 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 16 18:00:00.385134 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:00:00.385147 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:00:00.385160 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 18:00:00.385175 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:00:00.385187 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:00:00.385200 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:00:00.385213 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 16 18:00:00.385226 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 16 18:00:00.385238 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:00:00.385251 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:00:00.385265 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 16 18:00:00.385279 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 16 18:00:00.385293 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. Jan 16 18:00:00.385306 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 16 18:00:00.385320 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 16 18:00:00.385334 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 16 18:00:00.385347 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... 
Jan 16 18:00:00.385362 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 16 18:00:00.385377 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 16 18:00:00.385389 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 16 18:00:00.385460 systemd-journald[1184]: Collecting audit messages is enabled. Jan 16 18:00:00.385493 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:00:00.385507 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:00:00.385522 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 16 18:00:00.385535 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:00:00.385549 systemd-journald[1184]: Journal started Jan 16 18:00:00.385574 systemd-journald[1184]: Runtime Journal (/run/log/journal/8df9800b1daa40b2a4f5723db35efbf5) is 8M, max 76.5M, 68.5M free. Jan 16 18:00:00.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.193000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:00.195000 audit: BPF prog-id=14 op=UNLOAD Jan 16 18:00:00.195000 audit: BPF prog-id=13 op=UNLOAD Jan 16 18:00:00.196000 audit: BPF prog-id=15 op=LOAD Jan 16 18:00:00.196000 audit: BPF prog-id=16 op=LOAD Jan 16 18:00:00.196000 audit: BPF prog-id=17 op=LOAD Jan 16 18:00:00.299000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.308000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.311000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.311000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.317000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.317000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:00.321000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.321000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.325000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.325000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.331000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.335000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.340000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:00.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.376000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 16 18:00:00.376000 audit[1184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=6 a1=ffffe5d1d4a0 a2=4000 a3=0 items=0 ppid=1 pid=1184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:00.376000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 16 18:00:00.000043 systemd[1]: Queued start job for default target multi-user.target. Jan 16 18:00:00.007426 systemd[1]: Unnecessary job was removed for dev-sda6.device - /dev/sda6. Jan 16 18:00:00.008054 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 16 18:00:00.395248 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 16 18:00:00.395331 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:00:00.400692 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 16 18:00:00.407839 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 16 18:00:00.416679 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... Jan 16 18:00:00.421905 systemd[1]: Started systemd-journald.service - Journal Service. 
Jan 16 18:00:00.421000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.426838 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 16 18:00:00.428000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.429938 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 16 18:00:00.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.433000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.432069 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 16 18:00:00.434894 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 16 18:00:00.436195 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 16 18:00:00.442000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.442751 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. 
Jan 16 18:00:00.455773 kernel: loop1: detected capacity change from 0 to 100192 Jan 16 18:00:00.465000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.465432 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 16 18:00:00.469726 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 16 18:00:00.474872 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 16 18:00:00.481960 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 16 18:00:00.499729 kernel: loop2: detected capacity change from 0 to 8 Jan 16 18:00:00.503243 systemd-journald[1184]: Time spent on flushing to /var/log/journal/8df9800b1daa40b2a4f5723db35efbf5 is 33.236ms for 1303 entries. Jan 16 18:00:00.503243 systemd-journald[1184]: System Journal (/var/log/journal/8df9800b1daa40b2a4f5723db35efbf5) is 8M, max 588.1M, 580.1M free. Jan 16 18:00:00.549190 systemd-journald[1184]: Received client request to flush runtime journal. Jan 16 18:00:00.549232 kernel: loop3: detected capacity change from 0 to 207008 Jan 16 18:00:00.522000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.531000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.513269 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. Jan 16 18:00:00.513282 systemd-tmpfiles[1221]: ACLs are not supported, ignoring. 
Jan 16 18:00:00.521903 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 16 18:00:00.527927 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 16 18:00:00.537963 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 16 18:00:00.555788 kernel: loop4: detected capacity change from 0 to 45344 Jan 16 18:00:00.557931 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 16 18:00:00.558000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.596687 kernel: loop5: detected capacity change from 0 to 100192 Jan 16 18:00:00.599359 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 16 18:00:00.600000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.602000 audit: BPF prog-id=18 op=LOAD Jan 16 18:00:00.602000 audit: BPF prog-id=19 op=LOAD Jan 16 18:00:00.603000 audit: BPF prog-id=20 op=LOAD Jan 16 18:00:00.606041 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 16 18:00:00.609000 audit: BPF prog-id=21 op=LOAD Jan 16 18:00:00.612937 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 16 18:00:00.618023 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... 
Jan 16 18:00:00.620700 kernel: loop6: detected capacity change from 0 to 8 Jan 16 18:00:00.625000 audit: BPF prog-id=22 op=LOAD Jan 16 18:00:00.625000 audit: BPF prog-id=23 op=LOAD Jan 16 18:00:00.625000 audit: BPF prog-id=24 op=LOAD Jan 16 18:00:00.628674 kernel: loop7: detected capacity change from 0 to 207008 Jan 16 18:00:00.631212 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 16 18:00:00.636000 audit: BPF prog-id=25 op=LOAD Jan 16 18:00:00.636000 audit: BPF prog-id=26 op=LOAD Jan 16 18:00:00.636000 audit: BPF prog-id=27 op=LOAD Jan 16 18:00:00.639993 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 16 18:00:00.664586 kernel: loop1: detected capacity change from 0 to 45344 Jan 16 18:00:00.665420 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jan 16 18:00:00.665734 systemd-tmpfiles[1268]: ACLs are not supported, ignoring. Jan 16 18:00:00.674984 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 16 18:00:00.676000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:00.686916 (sd-merge)[1264]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-hetzner.raw'. Jan 16 18:00:00.696395 (sd-merge)[1264]: Merged extensions into '/usr'. Jan 16 18:00:00.711911 systemd[1]: Reload requested from client PID 1219 ('systemd-sysext') (unit systemd-sysext.service)... Jan 16 18:00:00.711933 systemd[1]: Reloading... Jan 16 18:00:00.746270 systemd-nsresourced[1270]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 16 18:00:00.847698 zram_generator::config[1316]: No configuration found. 
Jan 16 18:00:00.922286 systemd-oomd[1266]: No swap; memory pressure usage will be degraded Jan 16 18:00:00.942753 systemd-resolved[1267]: Positive Trust Anchors: Jan 16 18:00:00.943107 systemd-resolved[1267]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 16 18:00:00.943153 systemd-resolved[1267]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 16 18:00:00.943222 systemd-resolved[1267]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 16 18:00:00.950829 systemd-resolved[1267]: Using system hostname 'ci-4580-0-0-p-e4bb445d88'. Jan 16 18:00:01.066859 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 16 18:00:01.067117 systemd[1]: Reloading finished in 354 ms. Jan 16 18:00:01.088155 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 16 18:00:01.088000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.089038 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 16 18:00:01.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:01.089980 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 16 18:00:01.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.090780 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 16 18:00:01.090000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.091743 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 16 18:00:01.091000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.096097 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 16 18:00:01.106859 systemd[1]: Starting ensure-sysext.service... Jan 16 18:00:01.111886 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... 
Jan 16 18:00:01.112000 audit: BPF prog-id=28 op=LOAD Jan 16 18:00:01.112000 audit: BPF prog-id=21 op=UNLOAD Jan 16 18:00:01.116000 audit: BPF prog-id=29 op=LOAD Jan 16 18:00:01.116000 audit: BPF prog-id=25 op=UNLOAD Jan 16 18:00:01.116000 audit: BPF prog-id=30 op=LOAD Jan 16 18:00:01.116000 audit: BPF prog-id=31 op=LOAD Jan 16 18:00:01.116000 audit: BPF prog-id=26 op=UNLOAD Jan 16 18:00:01.116000 audit: BPF prog-id=27 op=UNLOAD Jan 16 18:00:01.117000 audit: BPF prog-id=32 op=LOAD Jan 16 18:00:01.117000 audit: BPF prog-id=22 op=UNLOAD Jan 16 18:00:01.117000 audit: BPF prog-id=33 op=LOAD Jan 16 18:00:01.117000 audit: BPF prog-id=34 op=LOAD Jan 16 18:00:01.117000 audit: BPF prog-id=23 op=UNLOAD Jan 16 18:00:01.117000 audit: BPF prog-id=24 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=35 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=18 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=36 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=37 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=19 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=20 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=38 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=15 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=39 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=40 op=LOAD Jan 16 18:00:01.118000 audit: BPF prog-id=16 op=UNLOAD Jan 16 18:00:01.118000 audit: BPF prog-id=17 op=UNLOAD Jan 16 18:00:01.151846 systemd[1]: Reload requested from client PID 1350 ('systemctl') (unit ensure-sysext.service)... Jan 16 18:00:01.151869 systemd[1]: Reloading... Jan 16 18:00:01.156515 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 16 18:00:01.157285 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 16 18:00:01.157743 systemd-tmpfiles[1351]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. 
Jan 16 18:00:01.158937 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 16 18:00:01.159171 systemd-tmpfiles[1351]: ACLs are not supported, ignoring. Jan 16 18:00:01.164700 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:00:01.164718 systemd-tmpfiles[1351]: Skipping /boot Jan 16 18:00:01.173344 systemd-tmpfiles[1351]: Detected autofs mount point /boot during canonicalization of boot. Jan 16 18:00:01.173487 systemd-tmpfiles[1351]: Skipping /boot Jan 16 18:00:01.239740 zram_generator::config[1386]: No configuration found. Jan 16 18:00:01.405527 systemd[1]: Reloading finished in 253 ms. Jan 16 18:00:01.419746 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 16 18:00:01.420000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.422000 audit: BPF prog-id=41 op=LOAD Jan 16 18:00:01.422000 audit: BPF prog-id=29 op=UNLOAD Jan 16 18:00:01.422000 audit: BPF prog-id=42 op=LOAD Jan 16 18:00:01.422000 audit: BPF prog-id=43 op=LOAD Jan 16 18:00:01.422000 audit: BPF prog-id=30 op=UNLOAD Jan 16 18:00:01.422000 audit: BPF prog-id=31 op=UNLOAD Jan 16 18:00:01.423000 audit: BPF prog-id=44 op=LOAD Jan 16 18:00:01.423000 audit: BPF prog-id=38 op=UNLOAD Jan 16 18:00:01.423000 audit: BPF prog-id=45 op=LOAD Jan 16 18:00:01.423000 audit: BPF prog-id=46 op=LOAD Jan 16 18:00:01.423000 audit: BPF prog-id=39 op=UNLOAD Jan 16 18:00:01.423000 audit: BPF prog-id=40 op=UNLOAD Jan 16 18:00:01.423000 audit: BPF prog-id=47 op=LOAD Jan 16 18:00:01.423000 audit: BPF prog-id=35 op=UNLOAD Jan 16 18:00:01.424000 audit: BPF prog-id=48 op=LOAD Jan 16 18:00:01.424000 audit: BPF prog-id=49 op=LOAD Jan 16 18:00:01.424000 audit: BPF prog-id=36 op=UNLOAD Jan 16 18:00:01.424000 audit: BPF prog-id=37 op=UNLOAD Jan 16 18:00:01.424000 
audit: BPF prog-id=50 op=LOAD Jan 16 18:00:01.424000 audit: BPF prog-id=28 op=UNLOAD Jan 16 18:00:01.426000 audit: BPF prog-id=51 op=LOAD Jan 16 18:00:01.426000 audit: BPF prog-id=32 op=UNLOAD Jan 16 18:00:01.426000 audit: BPF prog-id=52 op=LOAD Jan 16 18:00:01.426000 audit: BPF prog-id=53 op=LOAD Jan 16 18:00:01.426000 audit: BPF prog-id=33 op=UNLOAD Jan 16 18:00:01.426000 audit: BPF prog-id=34 op=UNLOAD Jan 16 18:00:01.430856 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 16 18:00:01.430000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.441786 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 16 18:00:01.445949 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 16 18:00:01.457577 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 16 18:00:01.462861 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 16 18:00:01.463000 audit: BPF prog-id=54 op=LOAD Jan 16 18:00:01.463000 audit: BPF prog-id=55 op=LOAD Jan 16 18:00:01.464000 audit: BPF prog-id=7 op=UNLOAD Jan 16 18:00:01.464000 audit: BPF prog-id=8 op=UNLOAD Jan 16 18:00:01.467192 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 16 18:00:01.471034 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 16 18:00:01.479251 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:00:01.489869 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 16 18:00:01.493128 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... 
Jan 16 18:00:01.503001 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 16 18:00:01.505878 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:00:01.506150 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:00:01.506246 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:00:01.510401 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:00:01.510578 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:00:01.511831 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 16 18:00:01.511940 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:00:01.517017 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 16 18:00:01.525150 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 16 18:00:01.526918 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 16 18:00:01.527133 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. 
Jan 16 18:00:01.527230 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 16 18:00:01.527000 audit[1430]: SYSTEM_BOOT pid=1430 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.547000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.547714 systemd[1]: Finished ensure-sysext.service. Jan 16 18:00:01.554000 audit: BPF prog-id=56 op=LOAD Jan 16 18:00:01.559456 systemd[1]: Starting systemd-timesyncd.service - Network Time Synchronization... Jan 16 18:00:01.561150 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 16 18:00:01.561000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=clean-ca-certificates comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.564677 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 16 18:00:01.564000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.566860 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). 
Jan 16 18:00:01.572000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.572000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.571613 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 16 18:00:01.572257 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 16 18:00:01.586693 systemd-udevd[1426]: Using default interface naming scheme 'v257'. Jan 16 18:00:01.589095 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 16 18:00:01.589414 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 16 18:00:01.590000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.590000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.591929 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 16 18:00:01.595832 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 16 18:00:01.598000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:01.598000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.597734 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 16 18:00:01.599239 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 16 18:00:01.605280 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 16 18:00:01.605000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.613110 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 16 18:00:01.614213 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 16 18:00:01.615000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:01.615000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:01.638000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 16 18:00:01.638000 audit[1462]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe64c0290 a2=420 a3=0 items=0 ppid=1422 pid=1462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:01.638000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:00:01.640756 augenrules[1462]: No rules Jan 16 18:00:01.641546 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:00:01.642017 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 18:00:01.652848 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 16 18:00:01.659913 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 16 18:00:01.689721 systemd[1]: Started systemd-timesyncd.service - Network Time Synchronization. Jan 16 18:00:01.692289 systemd[1]: Reached target time-set.target - System Time Set. Jan 16 18:00:01.782467 systemd-networkd[1471]: lo: Link UP Jan 16 18:00:01.782482 systemd-networkd[1471]: lo: Gained carrier Jan 16 18:00:01.789224 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 16 18:00:01.790438 systemd[1]: Reached target network.target - Network. Jan 16 18:00:01.796236 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 16 18:00:01.802039 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 16 18:00:01.859766 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. 
Jan 16 18:00:01.923916 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:00:01.923931 systemd-networkd[1471]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:00:01.927043 systemd-networkd[1471]: eth0: Link UP Jan 16 18:00:01.927788 systemd-networkd[1471]: eth0: Gained carrier Jan 16 18:00:01.927825 systemd-networkd[1471]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:00:01.956377 systemd-networkd[1471]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:00:01.956394 systemd-networkd[1471]: eth1: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 16 18:00:01.963133 systemd-networkd[1471]: eth1: Link UP Jan 16 18:00:01.964901 systemd-networkd[1471]: eth1: Gained carrier Jan 16 18:00:01.964941 systemd-networkd[1471]: eth1: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 16 18:00:01.975358 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 16 18:00:01.987763 systemd-networkd[1471]: eth0: DHCPv4 address 167.235.246.183/32, gateway 172.31.1.1 acquired from 172.31.1.1 Jan 16 18:00:01.989202 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 16 18:00:02.009768 systemd-networkd[1471]: eth1: DHCPv4 address 10.0.0.3/32 acquired from 10.0.0.1 Jan 16 18:00:02.011207 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 16 18:00:02.012831 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. 
Jan 16 18:00:02.025900 kernel: mousedev: PS/2 mouse device common for all mice Jan 16 18:00:02.124440 ldconfig[1424]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 16 18:00:02.132706 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 16 18:00:02.139991 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 16 18:00:02.181757 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 16 18:00:02.184095 systemd[1]: Reached target sysinit.target - System Initialization. Jan 16 18:00:02.185257 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 16 18:00:02.187983 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 16 18:00:02.189544 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 16 18:00:02.191248 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 16 18:00:02.193599 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 16 18:00:02.194872 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 16 18:00:02.196783 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 16 18:00:02.197692 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 16 18:00:02.197734 systemd[1]: Reached target paths.target - Path Units. Jan 16 18:00:02.198737 systemd[1]: Reached target timers.target - Timer Units. Jan 16 18:00:02.200850 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 16 18:00:02.205062 systemd[1]: Starting docker.socket - Docker Socket for the API... 
Jan 16 18:00:02.208109 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 16 18:00:02.209711 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 16 18:00:02.212072 kernel: [drm] pci: virtio-gpu-pci detected at 0000:00:01.0 Jan 16 18:00:02.212164 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 16 18:00:02.212178 kernel: [drm] features: -context_init Jan 16 18:00:02.212199 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 16 18:00:02.220581 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 16 18:00:02.222339 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 16 18:00:02.227494 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 16 18:00:02.231507 systemd[1]: Condition check resulted in dev-virtio\x2dports-org.qemu.guest_agent.0.device - /dev/virtio-ports/org.qemu.guest_agent.0 being skipped. Jan 16 18:00:02.241424 systemd[1]: Reached target sockets.target - Socket Units. Jan 16 18:00:02.242723 systemd[1]: Reached target basic.target - Basic System. Jan 16 18:00:02.243806 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:00:02.243847 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 16 18:00:02.260017 systemd[1]: Starting containerd.service - containerd container runtime... Jan 16 18:00:02.271051 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 16 18:00:02.276248 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 16 18:00:02.305692 kernel: [drm] number of scanouts: 1 Jan 16 18:00:02.305779 kernel: [drm] number of cap sets: 0 Jan 16 18:00:02.306086 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... 
Jan 16 18:00:02.310802 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:00:01.0 on minor 0 Jan 16 18:00:02.310963 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 16 18:00:02.314071 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 16 18:00:02.315717 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 16 18:00:02.318992 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 16 18:00:02.323919 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 16 18:00:02.329796 systemd[1]: Started qemu-guest-agent.service - QEMU Guest Agent. Jan 16 18:00:02.336011 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 16 18:00:02.343011 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 16 18:00:02.349590 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 16 18:00:02.351224 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 16 18:00:02.351866 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 16 18:00:02.359710 systemd[1]: Starting update-engine.service - Update Engine... Jan 16 18:00:02.364918 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 16 18:00:02.376687 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. Jan 16 18:00:02.380335 jq[1541]: false Jan 16 18:00:02.381996 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - QEMU_HARDDISK OEM. Jan 16 18:00:02.383826 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. 
Jan 16 18:00:02.384839 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 16 18:00:02.397535 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... Jan 16 18:00:02.429290 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 16 18:00:02.429692 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 16 18:00:02.432453 coreos-metadata[1535]: Jan 16 18:00:02.429 INFO Fetching http://169.254.169.254/hetzner/v1/metadata: Attempt #1 Jan 16 18:00:02.436101 systemd[1]: motdgen.service: Deactivated successfully. Jan 16 18:00:02.437738 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 16 18:00:02.441950 coreos-metadata[1535]: Jan 16 18:00:02.440 INFO Fetch successful Jan 16 18:00:02.441950 coreos-metadata[1535]: Jan 16 18:00:02.441 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/private-networks: Attempt #1 Jan 16 18:00:02.441950 coreos-metadata[1535]: Jan 16 18:00:02.441 INFO Fetch successful Jan 16 18:00:02.443503 extend-filesystems[1542]: Found /dev/sda6 Jan 16 18:00:02.473002 jq[1550]: true Jan 16 18:00:02.480451 extend-filesystems[1542]: Found /dev/sda9 Jan 16 18:00:02.505011 extend-filesystems[1542]: Checking size of /dev/sda9 Jan 16 18:00:02.533875 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:00:02.537586 dbus-daemon[1536]: [system] SELinux support is enabled Jan 16 18:00:02.538712 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 16 18:00:02.545303 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 16 18:00:02.545355 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. 
Jan 16 18:00:02.548863 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 16 18:00:02.548896 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 16 18:00:02.554905 kernel: Console: switching to colour frame buffer device 160x50 Jan 16 18:00:02.555493 tar[1567]: linux-arm64/LICENSE Jan 16 18:00:02.555904 tar[1567]: linux-arm64/helm Jan 16 18:00:02.568660 kernel: virtio-pci 0000:00:01.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 16 18:00:02.569313 jq[1584]: true Jan 16 18:00:02.607006 extend-filesystems[1542]: Resized partition /dev/sda9 Jan 16 18:00:02.614606 extend-filesystems[1596]: resize2fs 1.47.3 (8-Jul-2025) Jan 16 18:00:02.630420 update_engine[1549]: I20260116 18:00:02.630110 1549 main.cc:92] Flatcar Update Engine starting Jan 16 18:00:02.639395 systemd[1]: Started update-engine.service - Update Engine. Jan 16 18:00:02.640143 update_engine[1549]: I20260116 18:00:02.639763 1549 update_check_scheduler.cc:74] Next update check in 3m19s Jan 16 18:00:02.652645 kernel: EXT4-fs (sda9): resizing filesystem from 1617920 to 8410107 blocks Jan 16 18:00:02.672986 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 16 18:00:02.675332 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 16 18:00:02.702450 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 16 18:00:02.705666 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 16 18:00:02.784414 bash[1616]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:00:02.789242 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 16 18:00:02.795285 systemd[1]: Starting sshkeys.service... 
Jan 16 18:00:02.871849 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 16 18:00:02.874969 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 16 18:00:02.894650 kernel: EXT4-fs (sda9): resized filesystem to 8410107 Jan 16 18:00:02.913288 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:00:02.935832 extend-filesystems[1596]: Filesystem at /dev/sda9 is mounted on /; on-line resizing required Jan 16 18:00:02.935832 extend-filesystems[1596]: old_desc_blocks = 1, new_desc_blocks = 5 Jan 16 18:00:02.935832 extend-filesystems[1596]: The filesystem on /dev/sda9 is now 8410107 (4k) blocks long. Jan 16 18:00:02.951462 extend-filesystems[1542]: Resized filesystem in /dev/sda9 Jan 16 18:00:02.941411 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 16 18:00:02.943698 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 16 18:00:02.972544 coreos-metadata[1629]: Jan 16 18:00:02.972 INFO Fetching http://169.254.169.254/hetzner/v1/metadata/public-keys: Attempt #1 Jan 16 18:00:02.972544 coreos-metadata[1629]: Jan 16 18:00:02.972 INFO Fetch successful Jan 16 18:00:02.975391 unknown[1629]: wrote ssh authorized keys file for user: core Jan 16 18:00:02.987639 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 16 18:00:02.988209 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:00:02.991176 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:00:02.996086 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 16 18:00:03.027816 systemd-networkd[1471]: eth0: Gained IPv6LL Jan 16 18:00:03.029847 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. 
Jan 16 18:00:03.053691 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 16 18:00:03.055419 systemd[1]: Reached target network-online.target - Network is Online. Jan 16 18:00:03.062143 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:03.067794 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 16 18:00:03.082333 update-ssh-keys[1638]: Updated "/home/core/.ssh/authorized_keys" Jan 16 18:00:03.084726 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 16 18:00:03.094209 systemd[1]: Finished sshkeys.service. Jan 16 18:00:03.104185 containerd[1571]: time="2026-01-16T18:00:03Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 16 18:00:03.114540 containerd[1571]: time="2026-01-16T18:00:03.114474360Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 16 18:00:03.154997 containerd[1571]: time="2026-01-16T18:00:03.153173120Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="29µs" Jan 16 18:00:03.165747 containerd[1571]: time="2026-01-16T18:00:03.165434640Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 16 18:00:03.165747 containerd[1571]: time="2026-01-16T18:00:03.165537200Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 16 18:00:03.165747 containerd[1571]: time="2026-01-16T18:00:03.165555520Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 16 18:00:03.165923 containerd[1571]: time="2026-01-16T18:00:03.165801520Z" level=info msg="loading plugin" 
id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 16 18:00:03.165923 containerd[1571]: time="2026-01-16T18:00:03.165841160Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:00:03.165923 containerd[1571]: time="2026-01-16T18:00:03.165910600Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 16 18:00:03.165979 containerd[1571]: time="2026-01-16T18:00:03.165929080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.171206 containerd[1571]: time="2026-01-16T18:00:03.171150680Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.171206 containerd[1571]: time="2026-01-16T18:00:03.171195720Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:00:03.171325 containerd[1571]: time="2026-01-16T18:00:03.171226000Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 16 18:00:03.171325 containerd[1571]: time="2026-01-16T18:00:03.171236760Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.172051 containerd[1571]: time="2026-01-16T18:00:03.171479160Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.172051 containerd[1571]: time="2026-01-16T18:00:03.171506120Z" level=info msg="loading plugin" 
id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 16 18:00:03.178951 systemd-logind[1548]: New seat seat0. Jan 16 18:00:03.179361 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.171616240Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.180647880Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.180699480Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.180712800Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.180752200Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.181031120Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 16 18:00:03.184114 containerd[1571]: time="2026-01-16T18:00:03.181105680Z" level=info msg="metadata content store policy set" policy=shared Jan 16 18:00:03.187790 systemd-logind[1548]: Watching system buttons on /dev/input/event0 (Power Button) Jan 16 18:00:03.187827 systemd-logind[1548]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 16 18:00:03.188277 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 16 18:00:03.200692 containerd[1571]: time="2026-01-16T18:00:03.200613520Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201745360Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201919920Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201935760Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201952120Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201968360Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201982600Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.201994000Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202009880Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202026160Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202039960Z" level=info 
msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202051880Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202065840Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 16 18:00:03.207709 containerd[1571]: time="2026-01-16T18:00:03.202081440Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202255960Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202281600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202296760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202309760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202323760Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202336640Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202348800Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202361240Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases 
type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202376080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202392120Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202403440Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202435640Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202481720Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 16 18:00:03.208005 containerd[1571]: time="2026-01-16T18:00:03.202498480Z" level=info msg="Start snapshots syncer" Jan 16 18:00:03.210173 containerd[1571]: time="2026-01-16T18:00:03.209695520Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 16 18:00:03.211446 containerd[1571]: time="2026-01-16T18:00:03.211374240Z" level=info msg="starting cri plugin" 
config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 16 18:00:03.212476 containerd[1571]: time="2026-01-16T18:00:03.211730080Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 16 18:00:03.213942 containerd[1571]: 
time="2026-01-16T18:00:03.213668000Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220075120Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220164400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220191680Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220204320Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220228040Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220239560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220259600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220273400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version type=io.containerd.grpc.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220292440Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220361360Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:00:03.221013 containerd[1571]: 
time="2026-01-16T18:00:03.220381240Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220391000Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220574640Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 16 18:00:03.221013 containerd[1571]: time="2026-01-16T18:00:03.220592880Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220666880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220685160Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220781120Z" level=info msg="runtime interface created" Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220789200Z" level=info msg="created NRI interface" Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220798880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220814480Z" level=info msg="Connect containerd service" Jan 16 18:00:03.221371 containerd[1571]: time="2026-01-16T18:00:03.220841920Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 16 18:00:03.222346 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. 
Jan 16 18:00:03.227722 containerd[1571]: time="2026-01-16T18:00:03.227027280Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 18:00:03.347863 systemd-networkd[1471]: eth1: Gained IPv6LL Jan 16 18:00:03.348408 systemd-timesyncd[1445]: Network configuration changed, trying to establish connection. Jan 16 18:00:03.439501 locksmithd[1599]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.533983440Z" level=info msg="Start subscribing containerd event" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534091320Z" level=info msg="Start recovering state" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534228520Z" level=info msg="Start event monitor" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534243840Z" level=info msg="Start cni network conf syncer for default" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534251000Z" level=info msg="Start streaming server" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534259560Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534267440Z" level=info msg="runtime interface starting up..." Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534273240Z" level=info msg="starting plugins..." Jan 16 18:00:03.534447 containerd[1571]: time="2026-01-16T18:00:03.534285960Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 16 18:00:03.536046 containerd[1571]: time="2026-01-16T18:00:03.535750600Z" level=info msg=serving... 
address=/run/containerd/containerd.sock.ttrpc Jan 16 18:00:03.536046 containerd[1571]: time="2026-01-16T18:00:03.535848440Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 16 18:00:03.536046 containerd[1571]: time="2026-01-16T18:00:03.536017040Z" level=info msg="containerd successfully booted in 0.432309s" Jan 16 18:00:03.536221 systemd[1]: Started containerd.service - containerd container runtime. Jan 16 18:00:03.776661 tar[1567]: linux-arm64/README.md Jan 16 18:00:03.808784 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 16 18:00:03.850936 sshd_keygen[1577]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 16 18:00:03.884374 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 16 18:00:03.889039 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 16 18:00:03.913099 systemd[1]: issuegen.service: Deactivated successfully. Jan 16 18:00:03.913803 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 16 18:00:03.920924 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 16 18:00:03.947839 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 16 18:00:03.959265 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 16 18:00:03.965014 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 16 18:00:03.967917 systemd[1]: Reached target getty.target - Login Prompts. Jan 16 18:00:04.209944 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:04.213485 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 16 18:00:04.217901 systemd[1]: Startup finished in 1.883s (kernel) + 5.216s (initrd) + 4.873s (userspace) = 11.974s. 
Jan 16 18:00:04.240311 (kubelet)[1704]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:00:04.753356 kubelet[1704]: E0116 18:00:04.753259 1704 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:00:04.758599 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:00:04.758853 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:00:04.759651 systemd[1]: kubelet.service: Consumed 896ms CPU time, 255.4M memory peak. Jan 16 18:00:15.008548 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 16 18:00:15.012196 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:15.220075 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:15.250544 (kubelet)[1723]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:00:15.301310 kubelet[1723]: E0116 18:00:15.301184 1723 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:00:15.304799 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:00:15.305077 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:00:15.305816 systemd[1]: kubelet.service: Consumed 203ms CPU time, 107.4M memory peak. 
Jan 16 18:00:25.558787 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 16 18:00:25.565352 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:25.731441 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:25.744718 (kubelet)[1737]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:00:25.798066 kubelet[1737]: E0116 18:00:25.798012 1737 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:00:25.801159 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:00:25.801431 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:00:25.802573 systemd[1]: kubelet.service: Consumed 190ms CPU time, 105.8M memory peak. Jan 16 18:00:30.965226 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 16 18:00:30.968119 systemd[1]: Started sshd@0-167.235.246.183:22-68.220.241.50:33268.service - OpenSSH per-connection server daemon (68.220.241.50:33268). Jan 16 18:00:31.544671 sshd[1745]: Accepted publickey for core from 68.220.241.50 port 33268 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:31.548791 sshd-session[1745]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:31.559683 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 16 18:00:31.561719 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 16 18:00:31.570792 systemd-logind[1548]: New session 1 of user core. 
Jan 16 18:00:31.585947 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 16 18:00:31.592183 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 16 18:00:31.612640 (systemd)[1751]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:31.618360 systemd-logind[1548]: New session 2 of user core. Jan 16 18:00:31.759285 systemd[1751]: Queued start job for default target default.target. Jan 16 18:00:31.772794 systemd[1751]: Created slice app.slice - User Application Slice. Jan 16 18:00:31.772873 systemd[1751]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 16 18:00:31.772901 systemd[1751]: Reached target paths.target - Paths. Jan 16 18:00:31.772995 systemd[1751]: Reached target timers.target - Timers. Jan 16 18:00:31.776055 systemd[1751]: Starting dbus.socket - D-Bus User Message Bus Socket... Jan 16 18:00:31.779866 systemd[1751]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 16 18:00:31.799954 systemd[1751]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 16 18:00:31.800070 systemd[1751]: Reached target sockets.target - Sockets. Jan 16 18:00:31.804738 systemd[1751]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 16 18:00:31.804970 systemd[1751]: Reached target basic.target - Basic System. Jan 16 18:00:31.805098 systemd[1751]: Reached target default.target - Main User Target. Jan 16 18:00:31.805145 systemd[1751]: Startup finished in 178ms. Jan 16 18:00:31.805609 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 16 18:00:31.812986 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 16 18:00:32.137713 systemd[1]: Started sshd@1-167.235.246.183:22-68.220.241.50:33280.service - OpenSSH per-connection server daemon (68.220.241.50:33280). 
Jan 16 18:00:32.702703 sshd[1765]: Accepted publickey for core from 68.220.241.50 port 33280 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:32.705285 sshd-session[1765]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:32.711389 systemd-logind[1548]: New session 3 of user core. Jan 16 18:00:32.723083 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 16 18:00:33.009934 sshd[1769]: Connection closed by 68.220.241.50 port 33280 Jan 16 18:00:33.011035 sshd-session[1765]: pam_unix(sshd:session): session closed for user core Jan 16 18:00:33.020039 systemd[1]: sshd@1-167.235.246.183:22-68.220.241.50:33280.service: Deactivated successfully. Jan 16 18:00:33.022587 systemd[1]: session-3.scope: Deactivated successfully. Jan 16 18:00:33.024502 systemd-logind[1548]: Session 3 logged out. Waiting for processes to exit. Jan 16 18:00:33.027300 systemd-logind[1548]: Removed session 3. Jan 16 18:00:33.129923 systemd[1]: Started sshd@2-167.235.246.183:22-68.220.241.50:54008.service - OpenSSH per-connection server daemon (68.220.241.50:54008). Jan 16 18:00:33.608082 systemd-timesyncd[1445]: Contacted time server 188.68.34.173:123 (2.flatcar.pool.ntp.org). Jan 16 18:00:33.608185 systemd-timesyncd[1445]: Initial clock synchronization to Fri 2026-01-16 18:00:33.600187 UTC. Jan 16 18:00:33.694949 sshd[1775]: Accepted publickey for core from 68.220.241.50 port 54008 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:33.696602 sshd-session[1775]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:33.703363 systemd-logind[1548]: New session 4 of user core. Jan 16 18:00:33.708902 systemd[1]: Started session-4.scope - Session 4 of User core. 
Jan 16 18:00:33.996798 sshd[1779]: Connection closed by 68.220.241.50 port 54008 Jan 16 18:00:33.996413 sshd-session[1775]: pam_unix(sshd:session): session closed for user core Jan 16 18:00:34.003393 systemd[1]: sshd@2-167.235.246.183:22-68.220.241.50:54008.service: Deactivated successfully. Jan 16 18:00:34.007449 systemd[1]: session-4.scope: Deactivated successfully. Jan 16 18:00:34.008707 systemd-logind[1548]: Session 4 logged out. Waiting for processes to exit. Jan 16 18:00:34.011284 systemd-logind[1548]: Removed session 4. Jan 16 18:00:34.114438 systemd[1]: Started sshd@3-167.235.246.183:22-68.220.241.50:54010.service - OpenSSH per-connection server daemon (68.220.241.50:54010). Jan 16 18:00:34.670383 sshd[1785]: Accepted publickey for core from 68.220.241.50 port 54010 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:34.674483 sshd-session[1785]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:34.687649 systemd-logind[1548]: New session 5 of user core. Jan 16 18:00:34.700856 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 16 18:00:34.989906 sshd[1789]: Connection closed by 68.220.241.50 port 54010 Jan 16 18:00:34.984789 sshd-session[1785]: pam_unix(sshd:session): session closed for user core Jan 16 18:00:34.996306 systemd[1]: sshd@3-167.235.246.183:22-68.220.241.50:54010.service: Deactivated successfully. Jan 16 18:00:34.999152 systemd[1]: session-5.scope: Deactivated successfully. Jan 16 18:00:35.001938 systemd-logind[1548]: Session 5 logged out. Waiting for processes to exit. Jan 16 18:00:35.004104 systemd-logind[1548]: Removed session 5. Jan 16 18:00:35.092347 systemd[1]: Started sshd@4-167.235.246.183:22-68.220.241.50:54020.service - OpenSSH per-connection server daemon (68.220.241.50:54020). 
Jan 16 18:00:35.647321 sshd[1795]: Accepted publickey for core from 68.220.241.50 port 54020 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:35.649015 sshd-session[1795]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:35.655043 systemd-logind[1548]: New session 6 of user core. Jan 16 18:00:35.667456 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 16 18:00:35.857917 sudo[1800]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 16 18:00:35.858282 sudo[1800]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:00:35.869197 sudo[1800]: pam_unix(sudo:session): session closed for user root Jan 16 18:00:35.966782 sshd[1799]: Connection closed by 68.220.241.50 port 54020 Jan 16 18:00:35.968372 sshd-session[1795]: pam_unix(sshd:session): session closed for user core Jan 16 18:00:35.976154 systemd[1]: sshd@4-167.235.246.183:22-68.220.241.50:54020.service: Deactivated successfully. Jan 16 18:00:35.979075 systemd[1]: session-6.scope: Deactivated successfully. Jan 16 18:00:35.980484 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 16 18:00:35.982050 systemd-logind[1548]: Session 6 logged out. Waiting for processes to exit. Jan 16 18:00:35.984776 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:35.986231 systemd-logind[1548]: Removed session 6. Jan 16 18:00:36.083011 systemd[1]: Started sshd@5-167.235.246.183:22-68.220.241.50:54026.service - OpenSSH per-connection server daemon (68.220.241.50:54026). Jan 16 18:00:36.170036 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 16 18:00:36.189126 (kubelet)[1818]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:00:36.246901 kubelet[1818]: E0116 18:00:36.246744 1818 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:00:36.251040 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:00:36.251348 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:00:36.253817 systemd[1]: kubelet.service: Consumed 194ms CPU time, 107.6M memory peak. Jan 16 18:00:36.650176 sshd[1810]: Accepted publickey for core from 68.220.241.50 port 54026 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:36.652099 sshd-session[1810]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:36.661283 systemd-logind[1548]: New session 7 of user core. Jan 16 18:00:36.669188 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 16 18:00:36.862592 sudo[1829]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 16 18:00:36.862974 sudo[1829]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:00:36.868326 sudo[1829]: pam_unix(sudo:session): session closed for user root Jan 16 18:00:36.880080 sudo[1828]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 16 18:00:36.881082 sudo[1828]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:00:36.892061 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
Jan 16 18:00:36.953468 kernel: kauditd_printk_skb: 183 callbacks suppressed Jan 16 18:00:36.953607 kernel: audit: type=1305 audit(1768586436.949:225): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:00:36.949000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 16 18:00:36.953776 augenrules[1853]: No rules Jan 16 18:00:36.949000 audit[1853]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc05ddde0 a2=420 a3=0 items=0 ppid=1834 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:36.957214 kernel: audit: type=1300 audit(1768586436.949:225): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffc05ddde0 a2=420 a3=0 items=0 ppid=1834 pid=1853 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:36.954289 systemd[1]: audit-rules.service: Deactivated successfully. Jan 16 18:00:36.954753 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 16 18:00:36.957700 sudo[1828]: pam_unix(sudo:session): session closed for user root Jan 16 18:00:36.960793 kernel: audit: type=1327 audit(1768586436.949:225): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:00:36.960908 kernel: audit: type=1130 audit(1768586436.953:226): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:36.949000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 16 18:00:36.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.961123 kernel: audit: type=1131 audit(1768586436.953:227): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.953000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.962595 kernel: audit: type=1106 audit(1768586436.956:228): pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.956000 audit[1828]: USER_END pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.956000 audit[1828]: CRED_DISP pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:36.965856 kernel: audit: type=1104 audit(1768586436.956:229): pid=1828 uid=500 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:37.059768 sshd[1827]: Connection closed by 68.220.241.50 port 54026 Jan 16 18:00:37.060836 sshd-session[1810]: pam_unix(sshd:session): session closed for user core Jan 16 18:00:37.064000 audit[1810]: USER_END pid=1810 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.070646 kernel: audit: type=1106 audit(1768586437.064:230): pid=1810 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.065000 audit[1810]: CRED_DISP pid=1810 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.072066 systemd[1]: sshd@5-167.235.246.183:22-68.220.241.50:54026.service: Deactivated successfully. Jan 16 18:00:37.071000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-167.235.246.183:22-68.220.241.50:54026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:37.075255 kernel: audit: type=1104 audit(1768586437.065:231): pid=1810 uid=0 auid=500 ses=7 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.075352 kernel: audit: type=1131 audit(1768586437.071:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@5-167.235.246.183:22-68.220.241.50:54026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:37.076294 systemd[1]: session-7.scope: Deactivated successfully. Jan 16 18:00:37.079052 systemd-logind[1548]: Session 7 logged out. Waiting for processes to exit. Jan 16 18:00:37.082784 systemd-logind[1548]: Removed session 7. Jan 16 18:00:37.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-167.235.246.183:22-68.220.241.50:54040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:37.173310 systemd[1]: Started sshd@6-167.235.246.183:22-68.220.241.50:54040.service - OpenSSH per-connection server daemon (68.220.241.50:54040). 
Jan 16 18:00:37.739000 audit[1862]: USER_ACCT pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.740615 sshd[1862]: Accepted publickey for core from 68.220.241.50 port 54040 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:00:37.741000 audit[1862]: CRED_ACQ pid=1862 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.741000 audit[1862]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe025c660 a2=3 a3=0 items=0 ppid=1 pid=1862 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=8 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:37.741000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:00:37.743007 sshd-session[1862]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:00:37.749705 systemd-logind[1548]: New session 8 of user core. Jan 16 18:00:37.757071 systemd[1]: Started session-8.scope - Session 8 of User core. 
Jan 16 18:00:37.760000 audit[1862]: USER_START pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.762000 audit[1866]: CRED_ACQ pid=1866 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:00:37.949527 sudo[1867]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 16 18:00:37.948000 audit[1867]: USER_ACCT pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:37.949000 audit[1867]: CRED_REFR pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:37.950555 sudo[1867]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 16 18:00:37.950000 audit[1867]: USER_START pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:00:38.315164 systemd[1]: Starting docker.service - Docker Application Container Engine... 
Jan 16 18:00:38.330652 (dockerd)[1885]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 16 18:00:38.603547 dockerd[1885]: time="2026-01-16T18:00:38.603061633Z" level=info msg="Starting up" Jan 16 18:00:38.606266 dockerd[1885]: time="2026-01-16T18:00:38.606216582Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 16 18:00:38.626307 dockerd[1885]: time="2026-01-16T18:00:38.626256264Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 16 18:00:38.653366 systemd[1]: var-lib-docker-check\x2doverlayfs\x2dsupport1047836932-merged.mount: Deactivated successfully. Jan 16 18:00:38.664196 systemd[1]: var-lib-docker-metacopy\x2dcheck516994610-merged.mount: Deactivated successfully. Jan 16 18:00:38.681552 dockerd[1885]: time="2026-01-16T18:00:38.681247136Z" level=info msg="Loading containers: start." 
Jan 16 18:00:38.696660 kernel: Initializing XFRM netlink socket Jan 16 18:00:38.763000 audit[1933]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=1933 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.763000 audit[1933]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffd7a5ec10 a2=0 a3=0 items=0 ppid=1885 pid=1933 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.763000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:00:38.765000 audit[1935]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=1935 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.765000 audit[1935]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffe42c5d60 a2=0 a3=0 items=0 ppid=1885 pid=1935 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.765000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:00:38.769000 audit[1937]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=1937 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.769000 audit[1937]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd0b77bd0 a2=0 a3=0 items=0 ppid=1885 pid=1937 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.769000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:00:38.772000 audit[1939]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=1939 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.772000 audit[1939]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc18db040 a2=0 a3=0 items=0 ppid=1885 pid=1939 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.772000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:00:38.774000 audit[1941]: NETFILTER_CFG table=filter:6 family=2 entries=1 op=nft_register_chain pid=1941 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.774000 audit[1941]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc81427c0 a2=0 a3=0 items=0 ppid=1885 pid=1941 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.774000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:00:38.776000 audit[1943]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=1943 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.776000 audit[1943]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffff978bb0 a2=0 a3=0 items=0 ppid=1885 pid=1943 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.776000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:00:38.779000 audit[1945]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=1945 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.779000 audit[1945]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe3911680 a2=0 a3=0 items=0 ppid=1885 pid=1945 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.779000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:00:38.782000 audit[1947]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=1947 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.782000 audit[1947]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd9bb55e0 a2=0 a3=0 items=0 ppid=1885 pid=1947 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.782000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:00:38.812000 audit[1950]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=1950 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.812000 audit[1950]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffc4fb63d0 a2=0 a3=0 items=0 ppid=1885 pid=1950 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.812000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 16 18:00:38.815000 audit[1952]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=1952 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.815000 audit[1952]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc8d9b210 a2=0 a3=0 items=0 ppid=1885 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.815000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:00:38.818000 audit[1954]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=1954 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.818000 audit[1954]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffea68f2a0 a2=0 a3=0 items=0 ppid=1885 pid=1954 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.818000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:00:38.820000 audit[1956]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=1956 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.820000 audit[1956]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=fffff8389830 a2=0 a3=0 items=0 ppid=1885 pid=1956 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.820000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:00:38.823000 audit[1958]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=1958 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.823000 audit[1958]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffecc08d10 a2=0 a3=0 items=0 ppid=1885 pid=1958 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.823000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:00:38.866000 audit[1988]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=1988 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.866000 audit[1988]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffecbf9140 a2=0 a3=0 items=0 ppid=1885 pid=1988 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.866000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 16 18:00:38.869000 audit[1990]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=1990 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.869000 audit[1990]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=fffff70c0970 a2=0 a3=0 items=0 
ppid=1885 pid=1990 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.869000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 16 18:00:38.873000 audit[1992]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=1992 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.873000 audit[1992]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbdb1110 a2=0 a3=0 items=0 ppid=1885 pid=1992 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.873000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 16 18:00:38.875000 audit[1994]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=1994 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.875000 audit[1994]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcbb7cad0 a2=0 a3=0 items=0 ppid=1885 pid=1994 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.875000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 16 18:00:38.878000 audit[1996]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=1996 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.878000 audit[1996]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffc2ca1f20 a2=0 a3=0 items=0 ppid=1885 
pid=1996 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.878000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 16 18:00:38.881000 audit[1998]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=1998 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.881000 audit[1998]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffe7d76c60 a2=0 a3=0 items=0 ppid=1885 pid=1998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.881000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:00:38.884000 audit[2000]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2000 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.884000 audit[2000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff4e9aec0 a2=0 a3=0 items=0 ppid=1885 pid=2000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.884000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:00:38.886000 audit[2002]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.886000 audit[2002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 
a1=ffffcfe03aa0 a2=0 a3=0 items=0 ppid=1885 pid=2002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.886000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 16 18:00:38.890000 audit[2004]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.890000 audit[2004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffc3d2ec10 a2=0 a3=0 items=0 ppid=1885 pid=2004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.890000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 16 18:00:38.892000 audit[2006]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2006 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.892000 audit[2006]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffccd2d990 a2=0 a3=0 items=0 ppid=1885 pid=2006 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.892000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 16 18:00:38.895000 audit[2008]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule 
pid=2008 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.895000 audit[2008]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd375ae70 a2=0 a3=0 items=0 ppid=1885 pid=2008 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.895000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 16 18:00:38.898000 audit[2010]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2010 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.898000 audit[2010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffec513920 a2=0 a3=0 items=0 ppid=1885 pid=2010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.898000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 16 18:00:38.901000 audit[2012]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2012 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.901000 audit[2012]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffde6d0ee0 a2=0 a3=0 items=0 ppid=1885 pid=2012 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.901000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 16 18:00:38.910000 audit[2017]: 
NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2017 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.910000 audit[2017]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff08cf200 a2=0 a3=0 items=0 ppid=1885 pid=2017 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.910000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:00:38.914000 audit[2019]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.914000 audit[2019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffc1c7ae10 a2=0 a3=0 items=0 ppid=1885 pid=2019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.914000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:00:38.917000 audit[2021]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2021 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.917000 audit[2021]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffff0421f20 a2=0 a3=0 items=0 ppid=1885 pid=2021 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.917000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:00:38.921000 audit[2023]: NETFILTER_CFG 
table=filter:31 family=10 entries=1 op=nft_register_chain pid=2023 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.921000 audit[2023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffcbd595d0 a2=0 a3=0 items=0 ppid=1885 pid=2023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.921000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 16 18:00:38.924000 audit[2025]: NETFILTER_CFG table=filter:32 family=10 entries=1 op=nft_register_rule pid=2025 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.924000 audit[2025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffda3b9a30 a2=0 a3=0 items=0 ppid=1885 pid=2025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.924000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 16 18:00:38.926000 audit[2027]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2027 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:38.926000 audit[2027]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=fffffd4ff070 a2=0 a3=0 items=0 ppid=1885 pid=2027 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.926000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 16 18:00:38.953000 audit[2034]: NETFILTER_CFG 
table=nat:34 family=2 entries=2 op=nft_register_chain pid=2034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.953000 audit[2034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=fffff89dd650 a2=0 a3=0 items=0 ppid=1885 pid=2034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.953000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 16 18:00:38.957000 audit[2036]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2036 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.957000 audit[2036]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffcc054550 a2=0 a3=0 items=0 ppid=1885 pid=2036 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.957000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 16 18:00:38.969000 audit[2044]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.969000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=fffff3bcfc80 a2=0 a3=0 items=0 ppid=1885 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.969000 audit: PROCTITLE 
proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 16 18:00:38.982000 audit[2050]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.982000 audit[2050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=fffff0844270 a2=0 a3=0 items=0 ppid=1885 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.982000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 16 18:00:38.986000 audit[2052]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.986000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=ffffefc43230 a2=0 a3=0 items=0 ppid=1885 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 16 18:00:38.989000 audit[2054]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2054 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.989000 audit[2054]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffc101e3b0 a2=0 a3=0 items=0 ppid=1885 pid=2054 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.989000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 16 18:00:38.992000 audit[2056]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2056 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.992000 audit[2056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffda27c180 a2=0 a3=0 items=0 ppid=1885 pid=2056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 16 18:00:38.995000 audit[2058]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2058 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:38.995000 audit[2058]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffce831a50 a2=0 a3=0 items=0 ppid=1885 pid=2058 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:38.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 16 18:00:38.997928 systemd-networkd[1471]: docker0: Link UP Jan 16 18:00:39.005524 dockerd[1885]: 
time="2026-01-16T18:00:39.004740150Z" level=info msg="Loading containers: done." Jan 16 18:00:39.032191 dockerd[1885]: time="2026-01-16T18:00:39.032132459Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 16 18:00:39.032554 dockerd[1885]: time="2026-01-16T18:00:39.032527642Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 16 18:00:39.032865 dockerd[1885]: time="2026-01-16T18:00:39.032842852Z" level=info msg="Initializing buildkit" Jan 16 18:00:39.072402 dockerd[1885]: time="2026-01-16T18:00:39.072347067Z" level=info msg="Completed buildkit initialization" Jan 16 18:00:39.090247 dockerd[1885]: time="2026-01-16T18:00:39.090197377Z" level=info msg="Daemon has completed initialization" Jan 16 18:00:39.090698 dockerd[1885]: time="2026-01-16T18:00:39.090406184Z" level=info msg="API listen on /run/docker.sock" Jan 16 18:00:39.090608 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 16 18:00:39.089000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:39.645309 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck3707352646-merged.mount: Deactivated successfully. Jan 16 18:00:40.183353 containerd[1571]: time="2026-01-16T18:00:40.183306508Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\"" Jan 16 18:00:40.809804 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3927047583.mount: Deactivated successfully. 
Jan 16 18:00:41.766682 containerd[1571]: time="2026-01-16T18:00:41.766595820Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:41.768452 containerd[1571]: time="2026-01-16T18:00:41.768338727Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.32.11: active requests=0, bytes read=24845960" Jan 16 18:00:41.769768 containerd[1571]: time="2026-01-16T18:00:41.769411199Z" level=info msg="ImageCreate event name:\"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:41.772865 containerd[1571]: time="2026-01-16T18:00:41.772803642Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:41.774034 containerd[1571]: time="2026-01-16T18:00:41.773863957Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.32.11\" with image id \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\", repo tag \"registry.k8s.io/kube-apiserver:v1.32.11\", repo digest \"registry.k8s.io/kube-apiserver@sha256:41eaecaed9af0ca8ab36d7794819c7df199e68c6c6ee0649114d713c495f8bd5\", size \"26438581\" in 1.590510864s" Jan 16 18:00:41.774034 containerd[1571]: time="2026-01-16T18:00:41.773905625Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.32.11\" returns image reference \"sha256:58951ea1a0b5de44646ea292c94b9350f33f22d147fccfd84bdc405eaabc442c\"" Jan 16 18:00:41.774893 containerd[1571]: time="2026-01-16T18:00:41.774860613Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\"" Jan 16 18:00:43.293759 containerd[1571]: time="2026-01-16T18:00:43.293673738Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.32.11\" 
labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:43.295711 containerd[1571]: time="2026-01-16T18:00:43.295660285Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.32.11: active requests=0, bytes read=22613932" Jan 16 18:00:43.298099 containerd[1571]: time="2026-01-16T18:00:43.297485674Z" level=info msg="ImageCreate event name:\"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:43.302539 containerd[1571]: time="2026-01-16T18:00:43.302467855Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:43.304971 containerd[1571]: time="2026-01-16T18:00:43.304820303Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.32.11\" with image id \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\", repo tag \"registry.k8s.io/kube-controller-manager:v1.32.11\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:ce7b2ead5eef1a1554ef28b2b79596c6a8c6d506a87a7ab1381e77fe3d72f55f\", size \"24206567\" in 1.529920982s" Jan 16 18:00:43.304971 containerd[1571]: time="2026-01-16T18:00:43.304865731Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.32.11\" returns image reference \"sha256:82766e5f2d560b930b7069c03ec1366dc8fdb4a490c3005266d2fdc4ca21c2fc\"" Jan 16 18:00:43.305797 containerd[1571]: time="2026-01-16T18:00:43.305719581Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\"" Jan 16 18:00:44.383213 containerd[1571]: time="2026-01-16T18:00:44.383026486Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:44.386040 containerd[1571]: 
time="2026-01-16T18:00:44.385997178Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.32.11: active requests=0, bytes read=17608611" Jan 16 18:00:44.387047 containerd[1571]: time="2026-01-16T18:00:44.387014001Z" level=info msg="ImageCreate event name:\"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:44.391331 containerd[1571]: time="2026-01-16T18:00:44.391289444Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:44.393569 containerd[1571]: time="2026-01-16T18:00:44.393530800Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.32.11\" with image id \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\", repo tag \"registry.k8s.io/kube-scheduler:v1.32.11\", repo digest \"registry.k8s.io/kube-scheduler@sha256:b3039587bbe70e61a6aeaff56c21fdeeef104524a31f835bcc80887d40b8e6b2\", size \"19201246\" in 1.087778667s" Jan 16 18:00:44.393767 containerd[1571]: time="2026-01-16T18:00:44.393749385Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.32.11\" returns image reference \"sha256:cfa17ff3d66343f03eadbc235264b0615de49cc1f43da12cddba27d80c61f2c6\"" Jan 16 18:00:44.394305 containerd[1571]: time="2026-01-16T18:00:44.394285050Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\"" Jan 16 18:00:45.425720 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4012677624.mount: Deactivated successfully. 
Jan 16 18:00:45.733762 containerd[1571]: time="2026-01-16T18:00:45.733036816Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.32.11\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:45.735811 containerd[1571]: time="2026-01-16T18:00:45.735720742Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.32.11: active requests=0, bytes read=0" Jan 16 18:00:45.737809 containerd[1571]: time="2026-01-16T18:00:45.736941814Z" level=info msg="ImageCreate event name:\"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:45.739277 containerd[1571]: time="2026-01-16T18:00:45.739221315Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:45.740201 containerd[1571]: time="2026-01-16T18:00:45.740162893Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.32.11\" with image id \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\", repo tag \"registry.k8s.io/kube-proxy:v1.32.11\", repo digest \"registry.k8s.io/kube-proxy@sha256:4204f9136c23a867929d32046032fe069b49ad94cf168042405e7d0ec88bdba9\", size \"27557743\" in 1.345599593s" Jan 16 18:00:45.740370 containerd[1571]: time="2026-01-16T18:00:45.740350968Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.32.11\" returns image reference \"sha256:dcdb790dc2bfe6e0b86f702c7f336a38eaef34f6370eb6ff68f4e5b03ed4d425\"" Jan 16 18:00:45.741195 containerd[1571]: time="2026-01-16T18:00:45.741018131Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\"" Jan 16 18:00:46.315553 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 16 18:00:46.318160 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... 
Jan 16 18:00:46.359434 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1080096375.mount: Deactivated successfully. Jan 16 18:00:46.509522 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:46.521110 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 16 18:00:46.530314 kernel: audit: type=1130 audit(1768586446.510:283): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:46.510000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:46.552688 (kubelet)[2191]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 16 18:00:46.616572 kubelet[2191]: E0116 18:00:46.616433 2191 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 16 18:00:46.619669 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 16 18:00:46.620000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:00:46.620026 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 16 18:00:46.621857 systemd[1]: kubelet.service: Consumed 192ms CPU time, 106.2M memory peak. 
Jan 16 18:00:46.624730 kernel: audit: type=1131 audit(1768586446.620:284): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:00:47.400441 containerd[1571]: time="2026-01-16T18:00:47.400383650Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.11.3\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:47.402806 containerd[1571]: time="2026-01-16T18:00:47.402729083Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.11.3: active requests=0, bytes read=15956282" Jan 16 18:00:47.404131 containerd[1571]: time="2026-01-16T18:00:47.403403503Z" level=info msg="ImageCreate event name:\"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:47.408643 containerd[1571]: time="2026-01-16T18:00:47.408566352Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:47.410207 containerd[1571]: time="2026-01-16T18:00:47.410156102Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.11.3\" with image id \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\", repo tag \"registry.k8s.io/coredns/coredns:v1.11.3\", repo digest \"registry.k8s.io/coredns/coredns@sha256:9caabbf6238b189a65d0d6e6ac138de60d6a1c419e5a341fbbb7c78382559c6e\", size \"16948420\" in 1.668789372s" Jan 16 18:00:47.410355 containerd[1571]: time="2026-01-16T18:00:47.410340143Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.11.3\" returns image reference \"sha256:2f6c962e7b8311337352d9fdea917da2184d9919f4da7695bc2a6517cf392fe4\"" Jan 16 18:00:47.411117 containerd[1571]: time="2026-01-16T18:00:47.410995087Z" 
level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 16 18:00:47.972226 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2050120200.mount: Deactivated successfully. Jan 16 18:00:47.988178 containerd[1571]: time="2026-01-16T18:00:47.987851860Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:00:47.989437 containerd[1571]: time="2026-01-16T18:00:47.989363546Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 16 18:00:47.990863 containerd[1571]: time="2026-01-16T18:00:47.990810366Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:00:47.994635 containerd[1571]: time="2026-01-16T18:00:47.994559028Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 16 18:00:47.995859 containerd[1571]: time="2026-01-16T18:00:47.995326949Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 583.998851ms" Jan 16 18:00:47.995859 containerd[1571]: time="2026-01-16T18:00:47.995366020Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 16 18:00:47.996012 containerd[1571]: 
time="2026-01-16T18:00:47.995975894Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\"" Jan 16 18:00:48.244170 update_engine[1549]: I20260116 18:00:48.243757 1549 update_attempter.cc:509] Updating boot flags... Jan 16 18:00:48.786009 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1035626651.mount: Deactivated successfully. Jan 16 18:00:50.379459 containerd[1571]: time="2026-01-16T18:00:50.379392292Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.16-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:50.381670 containerd[1571]: time="2026-01-16T18:00:50.381585197Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.16-0: active requests=0, bytes read=66060366" Jan 16 18:00:50.382987 containerd[1571]: time="2026-01-16T18:00:50.382879696Z" level=info msg="ImageCreate event name:\"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:50.388045 containerd[1571]: time="2026-01-16T18:00:50.387945069Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:00:50.389979 containerd[1571]: time="2026-01-16T18:00:50.389670054Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.16-0\" with image id \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\", repo tag \"registry.k8s.io/etcd:3.5.16-0\", repo digest \"registry.k8s.io/etcd@sha256:c6a9d11cc5c04b114ccdef39a9265eeef818e3d02f5359be035ae784097fdec5\", size \"67941650\" in 2.393663686s" Jan 16 18:00:50.389979 containerd[1571]: time="2026-01-16T18:00:50.389724165Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.16-0\" returns image reference \"sha256:7fc9d4aa817aa6a3e549f3cd49d1f7b496407be979fc36dd5f356d59ce8c3a82\"" Jan 16 18:00:55.633000 audit[1]: SERVICE_START pid=1 uid=0 
auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:55.633559 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:55.634697 systemd[1]: kubelet.service: Consumed 192ms CPU time, 106.2M memory peak. Jan 16 18:00:55.634000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:55.637740 kernel: audit: type=1130 audit(1768586455.633:285): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:55.637840 kernel: audit: type=1131 audit(1768586455.634:286): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:00:55.642958 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:55.674217 systemd[1]: Reload requested from client PID 2346 ('systemctl') (unit session-8.scope)... Jan 16 18:00:55.674240 systemd[1]: Reloading... Jan 16 18:00:55.822660 zram_generator::config[2396]: No configuration found. Jan 16 18:00:56.035716 systemd[1]: Reloading finished in 361 ms. 
Jan 16 18:00:56.063000 audit: BPF prog-id=61 op=LOAD Jan 16 18:00:56.066786 kernel: audit: type=1334 audit(1768586456.063:287): prog-id=61 op=LOAD Jan 16 18:00:56.066832 kernel: audit: type=1334 audit(1768586456.063:288): prog-id=56 op=UNLOAD Jan 16 18:00:56.066854 kernel: audit: type=1334 audit(1768586456.063:289): prog-id=62 op=LOAD Jan 16 18:00:56.063000 audit: BPF prog-id=56 op=UNLOAD Jan 16 18:00:56.063000 audit: BPF prog-id=62 op=LOAD Jan 16 18:00:56.069053 kernel: audit: type=1334 audit(1768586456.063:290): prog-id=44 op=UNLOAD Jan 16 18:00:56.069158 kernel: audit: type=1334 audit(1768586456.064:291): prog-id=63 op=LOAD Jan 16 18:00:56.069180 kernel: audit: type=1334 audit(1768586456.065:292): prog-id=64 op=LOAD Jan 16 18:00:56.069197 kernel: audit: type=1334 audit(1768586456.065:293): prog-id=45 op=UNLOAD Jan 16 18:00:56.063000 audit: BPF prog-id=44 op=UNLOAD Jan 16 18:00:56.064000 audit: BPF prog-id=63 op=LOAD Jan 16 18:00:56.065000 audit: BPF prog-id=64 op=LOAD Jan 16 18:00:56.065000 audit: BPF prog-id=45 op=UNLOAD Jan 16 18:00:56.070666 kernel: audit: type=1334 audit(1768586456.065:294): prog-id=46 op=UNLOAD Jan 16 18:00:56.065000 audit: BPF prog-id=46 op=UNLOAD Jan 16 18:00:56.068000 audit: BPF prog-id=65 op=LOAD Jan 16 18:00:56.068000 audit: BPF prog-id=66 op=LOAD Jan 16 18:00:56.068000 audit: BPF prog-id=54 op=UNLOAD Jan 16 18:00:56.068000 audit: BPF prog-id=55 op=UNLOAD Jan 16 18:00:56.069000 audit: BPF prog-id=67 op=LOAD Jan 16 18:00:56.069000 audit: BPF prog-id=50 op=UNLOAD Jan 16 18:00:56.070000 audit: BPF prog-id=68 op=LOAD Jan 16 18:00:56.070000 audit: BPF prog-id=47 op=UNLOAD Jan 16 18:00:56.070000 audit: BPF prog-id=69 op=LOAD Jan 16 18:00:56.070000 audit: BPF prog-id=70 op=LOAD Jan 16 18:00:56.070000 audit: BPF prog-id=48 op=UNLOAD Jan 16 18:00:56.070000 audit: BPF prog-id=49 op=UNLOAD Jan 16 18:00:56.071000 audit: BPF prog-id=71 op=LOAD Jan 16 18:00:56.071000 audit: BPF prog-id=41 op=UNLOAD Jan 16 18:00:56.071000 audit: BPF prog-id=72 
op=LOAD Jan 16 18:00:56.071000 audit: BPF prog-id=73 op=LOAD Jan 16 18:00:56.071000 audit: BPF prog-id=42 op=UNLOAD Jan 16 18:00:56.071000 audit: BPF prog-id=43 op=UNLOAD Jan 16 18:00:56.078000 audit: BPF prog-id=74 op=LOAD Jan 16 18:00:56.084000 audit: BPF prog-id=57 op=UNLOAD Jan 16 18:00:56.085000 audit: BPF prog-id=75 op=LOAD Jan 16 18:00:56.085000 audit: BPF prog-id=51 op=UNLOAD Jan 16 18:00:56.086000 audit: BPF prog-id=76 op=LOAD Jan 16 18:00:56.086000 audit: BPF prog-id=77 op=LOAD Jan 16 18:00:56.086000 audit: BPF prog-id=52 op=UNLOAD Jan 16 18:00:56.086000 audit: BPF prog-id=53 op=UNLOAD Jan 16 18:00:56.088000 audit: BPF prog-id=78 op=LOAD Jan 16 18:00:56.088000 audit: BPF prog-id=58 op=UNLOAD Jan 16 18:00:56.088000 audit: BPF prog-id=79 op=LOAD Jan 16 18:00:56.088000 audit: BPF prog-id=80 op=LOAD Jan 16 18:00:56.088000 audit: BPF prog-id=59 op=UNLOAD Jan 16 18:00:56.088000 audit: BPF prog-id=60 op=UNLOAD Jan 16 18:00:56.107431 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 16 18:00:56.107751 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 16 18:00:56.108454 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:56.107000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 16 18:00:56.108721 systemd[1]: kubelet.service: Consumed 126ms CPU time, 95.1M memory peak. Jan 16 18:00:56.112265 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:00:56.309065 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:00:56.308000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:00:56.323018 (kubelet)[2441]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:00:56.373558 kubelet[2441]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:00:56.373558 kubelet[2441]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:00:56.373558 kubelet[2441]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:00:56.374027 kubelet[2441]: I0116 18:00:56.373668 2441 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:00:57.649532 kubelet[2441]: I0116 18:00:57.647847 2441 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 18:00:57.649532 kubelet[2441]: I0116 18:00:57.647885 2441 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:00:57.649532 kubelet[2441]: I0116 18:00:57.648419 2441 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 18:00:57.694259 kubelet[2441]: E0116 18:00:57.694199 2441 certificate_manager.go:562] "Unhandled Error" err="kubernetes.io/kube-apiserver-client-kubelet: Failed while requesting a signed certificate from the control plane: cannot create certificate signing request: Post \"https://167.235.246.183:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:57.696924 
kubelet[2441]: I0116 18:00:57.696883 2441 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:00:57.706680 kubelet[2441]: I0116 18:00:57.706616 2441 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:00:57.711107 kubelet[2441]: I0116 18:00:57.711050 2441 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. defaulting to /" Jan 16 18:00:57.712519 kubelet[2441]: I0116 18:00:57.712428 2441 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:00:57.712879 kubelet[2441]: I0116 18:00:57.712514 2441 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580-0-0-p-e4bb445d88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicy
Options":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:00:57.713089 kubelet[2441]: I0116 18:00:57.712970 2441 topology_manager.go:138] "Creating topology manager with none policy" Jan 16 18:00:57.713089 kubelet[2441]: I0116 18:00:57.712988 2441 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 18:00:57.713310 kubelet[2441]: I0116 18:00:57.713262 2441 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:00:57.719078 kubelet[2441]: I0116 18:00:57.718983 2441 kubelet.go:446] "Attempting to sync node with API server" Jan 16 18:00:57.719078 kubelet[2441]: I0116 18:00:57.719023 2441 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:00:57.719078 kubelet[2441]: I0116 18:00:57.719052 2441 kubelet.go:352] "Adding apiserver pod source" Jan 16 18:00:57.719078 kubelet[2441]: I0116 18:00:57.719063 2441 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:00:57.724364 kubelet[2441]: W0116 18:00:57.724297 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:57.724592 kubelet[2441]: E0116 18:00:57.724564 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:57.724808 
kubelet[2441]: W0116 18:00:57.724770 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Node: Get "https://167.235.246.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580-0-0-p-e4bb445d88&limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:57.724936 kubelet[2441]: E0116 18:00:57.724898 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Node: failed to list *v1.Node: Get \"https://167.235.246.183:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4580-0-0-p-e4bb445d88&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:57.725136 kubelet[2441]: I0116 18:00:57.725117 2441 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:00:57.726056 kubelet[2441]: I0116 18:00:57.726026 2441 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 18:00:57.726317 kubelet[2441]: W0116 18:00:57.726302 2441 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. 
Jan 16 18:00:57.727845 kubelet[2441]: I0116 18:00:57.727823 2441 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:00:57.727986 kubelet[2441]: I0116 18:00:57.727975 2441 server.go:1287] "Started kubelet" Jan 16 18:00:57.737233 kubelet[2441]: I0116 18:00:57.737183 2441 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:00:57.740468 kubelet[2441]: E0116 18:00:57.740037 2441 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://167.235.246.183:6443/api/v1/namespaces/default/events\": dial tcp 167.235.246.183:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4580-0-0-p-e4bb445d88.188b48034f70c5f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580-0-0-p-e4bb445d88,UID:ci-4580-0-0-p-e4bb445d88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-e4bb445d88,},FirstTimestamp:2026-01-16 18:00:57.727944183 +0000 UTC m=+1.400780184,LastTimestamp:2026-01-16 18:00:57.727944183 +0000 UTC m=+1.400780184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-e4bb445d88,}" Jan 16 18:00:57.740000 audit[2452]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2452 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.740000 audit[2452]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffcfdf2220 a2=0 a3=0 items=0 ppid=2441 pid=2452 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.740000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 
18:00:57.747656 kubelet[2441]: I0116 18:00:57.747116 2441 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 18:00:57.749612 kubelet[2441]: I0116 18:00:57.749570 2441 server.go:479] "Adding debug handlers to kubelet server" Jan 16 18:00:57.751024 kubelet[2441]: I0116 18:00:57.750940 2441 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:00:57.751363 kubelet[2441]: E0116 18:00:57.751340 2441 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" Jan 16 18:00:57.751817 kubelet[2441]: I0116 18:00:57.751755 2441 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:00:57.752349 kubelet[2441]: I0116 18:00:57.752330 2441 server.go:243] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:00:57.752676 kubelet[2441]: I0116 18:00:57.752656 2441 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:00:57.751000 audit[2454]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2454 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.751000 audit[2454]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcef8f790 a2=0 a3=0 items=0 ppid=2441 pid=2454 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.751000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:00:57.754969 kubelet[2441]: I0116 18:00:57.754898 2441 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:00:57.755040 kubelet[2441]: I0116 18:00:57.755027 2441 
reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:00:57.754000 audit[2456]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2456 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.754000 audit[2456]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=fffffe514920 a2=0 a3=0 items=0 ppid=2441 pid=2456 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.754000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:00:57.759647 kubelet[2441]: E0116 18:00:57.758518 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-e4bb445d88?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="200ms" Jan 16 18:00:57.759647 kubelet[2441]: W0116 18:00:57.759152 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:57.759647 kubelet[2441]: E0116 18:00:57.759200 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:57.757000 audit[2458]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2458 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.757000 audit[2458]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd0f40ea0 a2=0 a3=0 items=0 ppid=2441 pid=2458 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.757000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:00:57.760794 kubelet[2441]: E0116 18:00:57.760769 2441 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:00:57.762653 kubelet[2441]: I0116 18:00:57.761786 2441 factory.go:221] Registration of the containerd container factory successfully Jan 16 18:00:57.762653 kubelet[2441]: I0116 18:00:57.761806 2441 factory.go:221] Registration of the systemd container factory successfully Jan 16 18:00:57.762653 kubelet[2441]: I0116 18:00:57.761894 2441 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:00:57.768000 audit[2461]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2461 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.768000 audit[2461]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe4749a90 a2=0 a3=0 items=0 ppid=2441 pid=2461 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.768000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 16 18:00:57.772055 kubelet[2441]: I0116 18:00:57.770468 2441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 18:00:57.770000 audit[2462]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2462 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:57.770000 audit[2462]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffd4e329c0 a2=0 a3=0 items=0 ppid=2441 pid=2462 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.770000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 16 18:00:57.773668 kubelet[2441]: I0116 18:00:57.773604 2441 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 18:00:57.773668 kubelet[2441]: I0116 18:00:57.773658 2441 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 18:00:57.773752 kubelet[2441]: I0116 18:00:57.773690 2441 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 18:00:57.773752 kubelet[2441]: I0116 18:00:57.773698 2441 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 18:00:57.773804 kubelet[2441]: E0116 18:00:57.773750 2441 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:00:57.773000 audit[2464]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2464 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.773000 audit[2464]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc57a2760 a2=0 a3=0 items=0 ppid=2441 pid=2464 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.773000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:00:57.774000 audit[2465]: NETFILTER_CFG table=nat:49 family=2 entries=1 op=nft_register_chain pid=2465 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.774000 audit[2465]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd635d550 a2=0 a3=0 items=0 ppid=2441 pid=2465 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.774000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:00:57.775000 audit[2466]: NETFILTER_CFG table=filter:50 family=2 entries=1 op=nft_register_chain pid=2466 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:00:57.775000 audit[2466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffc768f90 a2=0 a3=0 items=0 ppid=2441 pid=2466 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.775000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:00:57.777000 audit[2467]: NETFILTER_CFG table=mangle:51 family=10 entries=1 op=nft_register_chain pid=2467 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:57.777000 audit[2467]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffffb19d780 a2=0 a3=0 items=0 ppid=2441 pid=2467 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.777000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 16 18:00:57.778000 audit[2468]: NETFILTER_CFG table=nat:52 family=10 entries=1 op=nft_register_chain pid=2468 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:57.778000 audit[2468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff482e7b0 a2=0 a3=0 items=0 ppid=2441 pid=2468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.778000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 16 18:00:57.780000 audit[2469]: NETFILTER_CFG table=filter:53 family=10 entries=1 op=nft_register_chain pid=2469 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:00:57.780000 audit[2469]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffee1b59a0 a2=0 a3=0 items=0 
ppid=2441 pid=2469 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:57.780000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 16 18:00:57.785251 kubelet[2441]: W0116 18:00:57.785141 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.RuntimeClass: Get "https://167.235.246.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:57.785451 kubelet[2441]: E0116 18:00:57.785231 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.RuntimeClass: failed to list *v1.RuntimeClass: Get \"https://167.235.246.183:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:57.793436 kubelet[2441]: I0116 18:00:57.793407 2441 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:00:57.793891 kubelet[2441]: I0116 18:00:57.793673 2441 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:00:57.793891 kubelet[2441]: I0116 18:00:57.793704 2441 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:00:57.797000 kubelet[2441]: I0116 18:00:57.796652 2441 policy_none.go:49] "None policy: Start" Jan 16 18:00:57.797000 kubelet[2441]: I0116 18:00:57.796700 2441 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:00:57.797000 kubelet[2441]: I0116 18:00:57.796719 2441 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:00:57.803526 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. 
Jan 16 18:00:57.818686 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 16 18:00:57.823913 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 16 18:00:57.834614 kubelet[2441]: I0116 18:00:57.834546 2441 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 18:00:57.837082 kubelet[2441]: I0116 18:00:57.837045 2441 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:00:57.837204 kubelet[2441]: I0116 18:00:57.837082 2441 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:00:57.837580 kubelet[2441]: I0116 18:00:57.837503 2441 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:00:57.839172 kubelet[2441]: E0116 18:00:57.839058 2441 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 16 18:00:57.839172 kubelet[2441]: E0116 18:00:57.839130 2441 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4580-0-0-p-e4bb445d88\" not found" Jan 16 18:00:57.890040 systemd[1]: Created slice kubepods-burstable-pod8c866d1895cbdafa1e19ab254972b915.slice - libcontainer container kubepods-burstable-pod8c866d1895cbdafa1e19ab254972b915.slice. Jan 16 18:00:57.910434 kubelet[2441]: E0116 18:00:57.910131 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.919053 systemd[1]: Created slice kubepods-burstable-pod680c78a69ceec4d014b9ac2c8c9f439d.slice - libcontainer container kubepods-burstable-pod680c78a69ceec4d014b9ac2c8c9f439d.slice. 
Jan 16 18:00:57.928847 kubelet[2441]: E0116 18:00:57.928689 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.933565 systemd[1]: Created slice kubepods-burstable-podef388b2e3a4a05631c55078f457a2431.slice - libcontainer container kubepods-burstable-podef388b2e3a4a05631c55078f457a2431.slice. Jan 16 18:00:57.936213 kubelet[2441]: E0116 18:00:57.936165 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.940992 kubelet[2441]: I0116 18:00:57.940349 2441 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.941233 kubelet[2441]: E0116 18:00:57.941142 2441 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.956514 kubelet[2441]: I0116 18:00:57.956020 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-ca-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.956514 kubelet[2441]: I0116 18:00:57.956203 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-flexvolume-dir\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 
18:00:57.956514 kubelet[2441]: I0116 18:00:57.956263 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.956514 kubelet[2441]: I0116 18:00:57.956300 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c866d1895cbdafa1e19ab254972b915-kubeconfig\") pod \"kube-scheduler-ci-4580-0-0-p-e4bb445d88\" (UID: \"8c866d1895cbdafa1e19ab254972b915\") " pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.956514 kubelet[2441]: I0116 18:00:57.956331 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-ca-certs\") pod \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.957013 kubelet[2441]: I0116 18:00:57.956361 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.957013 kubelet[2441]: I0116 18:00:57.956387 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-k8s-certs\") pod 
\"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.957013 kubelet[2441]: I0116 18:00:57.956414 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-k8s-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.957013 kubelet[2441]: I0116 18:00:57.956461 2441 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-kubeconfig\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:57.959333 kubelet[2441]: E0116 18:00:57.959275 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-e4bb445d88?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="400ms" Jan 16 18:00:58.143820 kubelet[2441]: I0116 18:00:58.143715 2441 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.144312 kubelet[2441]: E0116 18:00:58.144252 2441 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.213729 containerd[1571]: time="2026-01-16T18:00:58.213550139Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:kube-scheduler-ci-4580-0-0-p-e4bb445d88,Uid:8c866d1895cbdafa1e19ab254972b915,Namespace:kube-system,Attempt:0,}" Jan 16 18:00:58.235266 containerd[1571]: time="2026-01-16T18:00:58.234919998Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580-0-0-p-e4bb445d88,Uid:680c78a69ceec4d014b9ac2c8c9f439d,Namespace:kube-system,Attempt:0,}" Jan 16 18:00:58.239712 containerd[1571]: time="2026-01-16T18:00:58.239468014Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580-0-0-p-e4bb445d88,Uid:ef388b2e3a4a05631c55078f457a2431,Namespace:kube-system,Attempt:0,}" Jan 16 18:00:58.247317 containerd[1571]: time="2026-01-16T18:00:58.247270178Z" level=info msg="connecting to shim 832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10" address="unix:///run/containerd/s/1c62d78e0d09c0020cf321c81a4297c0dba149b92c687b32fd894980653b37ce" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:00:58.285641 containerd[1571]: time="2026-01-16T18:00:58.285249862Z" level=info msg="connecting to shim 83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580" address="unix:///run/containerd/s/caeda84c22ae067ebbb90f4d5406a81abd9f59705b65e975f2299f19ea3d7659" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:00:58.291926 systemd[1]: Started cri-containerd-832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10.scope - libcontainer container 832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10. 
Jan 16 18:00:58.308452 containerd[1571]: time="2026-01-16T18:00:58.308410458Z" level=info msg="connecting to shim 9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a" address="unix:///run/containerd/s/1537856525570f4b64e12a34e6fb8d8af8cb875ddb974ae535e6a60061e610a3" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:00:58.332961 systemd[1]: Started cri-containerd-83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580.scope - libcontainer container 83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580. Jan 16 18:00:58.335000 audit: BPF prog-id=81 op=LOAD Jan 16 18:00:58.335000 audit: BPF prog-id=82 op=LOAD Jan 16 18:00:58.335000 audit[2493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.335000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.338000 audit: BPF prog-id=82 op=UNLOAD Jan 16 18:00:58.338000 audit[2493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.338000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.341000 audit: BPF prog-id=83 
op=LOAD Jan 16 18:00:58.341000 audit[2493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.341000 audit: BPF prog-id=84 op=LOAD Jan 16 18:00:58.341000 audit[2493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.341000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.342000 audit: BPF prog-id=84 op=UNLOAD Jan 16 18:00:58.342000 audit[2493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 
16 18:00:58.342000 audit: BPF prog-id=83 op=UNLOAD Jan 16 18:00:58.342000 audit[2493]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.342000 audit: BPF prog-id=85 op=LOAD Jan 16 18:00:58.342000 audit[2493]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2480 pid=2493 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.342000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833326232623264356365623362653533666461393733343034633563 Jan 16 18:00:58.356921 systemd[1]: Started cri-containerd-9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a.scope - libcontainer container 9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a. 
Jan 16 18:00:58.359000 audit: BPF prog-id=86 op=LOAD Jan 16 18:00:58.362228 kubelet[2441]: E0116 18:00:58.362185 2441 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://167.235.246.183:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4580-0-0-p-e4bb445d88?timeout=10s\": dial tcp 167.235.246.183:6443: connect: connection refused" interval="800ms" Jan 16 18:00:58.361000 audit: BPF prog-id=87 op=LOAD Jan 16 18:00:58.361000 audit[2524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174180 a2=98 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.361000 audit: BPF prog-id=87 op=UNLOAD Jan 16 18:00:58.361000 audit[2524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.361000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.362000 audit: BPF prog-id=88 op=LOAD Jan 16 18:00:58.362000 audit[2524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001743e8 a2=98 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.362000 audit: BPF prog-id=89 op=LOAD Jan 16 18:00:58.362000 audit[2524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000174168 a2=98 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.362000 audit: BPF prog-id=89 op=UNLOAD Jan 16 18:00:58.362000 audit[2524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.362000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.363000 audit: BPF prog-id=88 op=UNLOAD Jan 16 18:00:58.363000 audit[2524]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2524 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.363000 audit: BPF prog-id=90 op=LOAD Jan 16 18:00:58.363000 audit[2524]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000174648 a2=98 a3=0 items=0 ppid=2514 pid=2524 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.363000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3833646231663432613864666332646530353236313763333566303262 Jan 16 18:00:58.391000 audit: BPF prog-id=91 op=LOAD Jan 16 18:00:58.391000 audit: BPF prog-id=92 op=LOAD Jan 16 18:00:58.391000 audit[2563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.391000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.392000 audit: BPF prog-id=92 op=UNLOAD Jan 16 18:00:58.392000 audit[2563]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.392000 audit: BPF prog-id=93 op=LOAD Jan 16 18:00:58.392000 audit[2563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.392000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.393000 audit: BPF prog-id=94 op=LOAD Jan 16 18:00:58.393000 audit[2563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.393000 audit: BPF prog-id=94 
op=UNLOAD Jan 16 18:00:58.393000 audit[2563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.393000 audit: BPF prog-id=93 op=UNLOAD Jan 16 18:00:58.393000 audit[2563]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 18:00:58.393000 audit: BPF prog-id=95 op=LOAD Jan 16 18:00:58.393000 audit[2563]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2534 pid=2563 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.393000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3965623433303062373432346365373731363539346330643336633466 Jan 16 
18:00:58.406198 containerd[1571]: time="2026-01-16T18:00:58.405987940Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4580-0-0-p-e4bb445d88,Uid:8c866d1895cbdafa1e19ab254972b915,Namespace:kube-system,Attempt:0,} returns sandbox id \"832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10\"" Jan 16 18:00:58.413801 containerd[1571]: time="2026-01-16T18:00:58.413721270Z" level=info msg="CreateContainer within sandbox \"832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 16 18:00:58.433656 containerd[1571]: time="2026-01-16T18:00:58.433577644Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4580-0-0-p-e4bb445d88,Uid:ef388b2e3a4a05631c55078f457a2431,Namespace:kube-system,Attempt:0,} returns sandbox id \"83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580\"" Jan 16 18:00:58.435592 containerd[1571]: time="2026-01-16T18:00:58.435112367Z" level=info msg="Container d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:00:58.437765 containerd[1571]: time="2026-01-16T18:00:58.437732420Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4580-0-0-p-e4bb445d88,Uid:680c78a69ceec4d014b9ac2c8c9f439d,Namespace:kube-system,Attempt:0,} returns sandbox id \"9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a\"" Jan 16 18:00:58.438722 containerd[1571]: time="2026-01-16T18:00:58.438696962Z" level=info msg="CreateContainer within sandbox \"83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 16 18:00:58.441188 containerd[1571]: time="2026-01-16T18:00:58.441155511Z" level=info msg="CreateContainer within sandbox \"9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a\" for container 
&ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 16 18:00:58.447413 containerd[1571]: time="2026-01-16T18:00:58.446794175Z" level=info msg="CreateContainer within sandbox \"832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169\"" Jan 16 18:00:58.452178 containerd[1571]: time="2026-01-16T18:00:58.452132630Z" level=info msg="Container 23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:00:58.463006 containerd[1571]: time="2026-01-16T18:00:58.462959965Z" level=info msg="Container 617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:00:58.470091 containerd[1571]: time="2026-01-16T18:00:58.469898657Z" level=info msg="StartContainer for \"d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169\"" Jan 16 18:00:58.473170 containerd[1571]: time="2026-01-16T18:00:58.473127608Z" level=info msg="connecting to shim d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169" address="unix:///run/containerd/s/1c62d78e0d09c0020cf321c81a4297c0dba149b92c687b32fd894980653b37ce" protocol=ttrpc version=3 Jan 16 18:00:58.483376 containerd[1571]: time="2026-01-16T18:00:58.483286091Z" level=info msg="CreateContainer within sandbox \"9eb4300b7424ce7716594c0d36c4feb43f463c6363a5a9e8b476c76d32fc3f3a\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d\"" Jan 16 18:00:58.484450 containerd[1571]: time="2026-01-16T18:00:58.484411576Z" level=info msg="StartContainer for \"23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d\"" Jan 16 18:00:58.487644 containerd[1571]: time="2026-01-16T18:00:58.487167775Z" level=info msg="CreateContainer within sandbox 
\"83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940\"" Jan 16 18:00:58.488329 containerd[1571]: time="2026-01-16T18:00:58.488283181Z" level=info msg="StartContainer for \"617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940\"" Jan 16 18:00:58.489441 containerd[1571]: time="2026-01-16T18:00:58.489405306Z" level=info msg="connecting to shim 23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d" address="unix:///run/containerd/s/1537856525570f4b64e12a34e6fb8d8af8cb875ddb974ae535e6a60061e610a3" protocol=ttrpc version=3 Jan 16 18:00:58.491219 containerd[1571]: time="2026-01-16T18:00:58.491176246Z" level=info msg="connecting to shim 617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940" address="unix:///run/containerd/s/caeda84c22ae067ebbb90f4d5406a81abd9f59705b65e975f2299f19ea3d7659" protocol=ttrpc version=3 Jan 16 18:00:58.507967 systemd[1]: Started cri-containerd-d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169.scope - libcontainer container d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169. Jan 16 18:00:58.530954 systemd[1]: Started cri-containerd-23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d.scope - libcontainer container 23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d. 
Jan 16 18:00:58.534000 audit: BPF prog-id=96 op=LOAD Jan 16 18:00:58.535000 audit: BPF prog-id=97 op=LOAD Jan 16 18:00:58.535000 audit[2611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.535000 audit: BPF prog-id=97 op=UNLOAD Jan 16 18:00:58.535000 audit[2611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.535000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.536000 audit: BPF prog-id=98 op=LOAD Jan 16 18:00:58.536000 audit[2611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.536000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.536000 audit: BPF prog-id=99 op=LOAD Jan 16 18:00:58.536000 audit[2611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.536000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.537000 audit: BPF prog-id=99 op=UNLOAD Jan 16 18:00:58.537000 audit[2611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.537000 audit: BPF prog-id=98 op=UNLOAD Jan 16 18:00:58.537000 audit[2611]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:00:58.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.537000 audit: BPF prog-id=100 op=LOAD Jan 16 18:00:58.537000 audit[2611]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2480 pid=2611 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6431646535616164356430313063373162633030323630326334303733 Jan 16 18:00:58.541928 systemd[1]: Started cri-containerd-617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940.scope - libcontainer container 617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940. 
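The `PROCTITLE` records above carry the container runtime's command line as a hex string (auditd encodes the process title as raw bytes, with NUL bytes separating the original argv entries). A minimal sketch for turning such a payload back into readable arguments — the `decode_proctitle` helper name and the shortened sample payload are illustrative, not part of any tool shown in this log:

```python
# Sketch: decode an audit PROCTITLE hex payload back into the argv it encodes.
# auditd logs the process title as hex-encoded raw bytes, with NUL (0x00)
# bytes separating the original command-line arguments.

def decode_proctitle(hex_payload: str) -> list[str]:
    raw = bytes.fromhex(hex_payload)
    # Split on NUL separators; drop any empty trailing fields.
    return [part.decode("utf-8", errors="replace")
            for part in raw.split(b"\x00") if part]

if __name__ == "__main__":
    # Shortened prefix of one PROCTITLE value seen in the log above.
    sample = ("72756E63002D2D726F6F74"
              "002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F")
    print(decode_proctitle(sample))
    # prints ['runc', '--root', '/run/containerd/runc/k8s.io']
```

Applied to the full payloads above, this recovers invocations of the form `runc --root /run/containerd/runc/k8s.io --log /run/containerd/io.containerd.runtime.v2.task/k8s.io/<container-id>`, matching the cri-containerd scopes systemd reports starting.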
Jan 16 18:00:58.548803 kubelet[2441]: I0116 18:00:58.548634 2441 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.549878 kubelet[2441]: E0116 18:00:58.549775 2441 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://167.235.246.183:6443/api/v1/nodes\": dial tcp 167.235.246.183:6443: connect: connection refused" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.576000 audit: BPF prog-id=101 op=LOAD Jan 16 18:00:58.576000 audit: BPF prog-id=102 op=LOAD Jan 16 18:00:58.576000 audit[2621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.576000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.578000 audit: BPF prog-id=102 op=UNLOAD Jan 16 18:00:58.578000 audit[2621]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.578000 audit: BPF prog-id=103 op=LOAD Jan 16 18:00:58.578000 audit[2621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 
a1=40001303e8 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.578000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.579000 audit: BPF prog-id=104 op=LOAD Jan 16 18:00:58.579000 audit[2621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.579000 audit: BPF prog-id=104 op=UNLOAD Jan 16 18:00:58.579000 audit[2621]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.579000 audit: BPF prog-id=103 op=UNLOAD Jan 16 18:00:58.579000 audit[2621]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.579000 audit: BPF prog-id=105 op=LOAD Jan 16 18:00:58.579000 audit[2621]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2514 pid=2621 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.579000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3631376138316638363461663333373463303636313137613331313337 Jan 16 18:00:58.582000 audit: BPF prog-id=106 op=LOAD Jan 16 18:00:58.584000 audit: BPF prog-id=107 op=LOAD Jan 16 18:00:58.584000 audit[2622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.584000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 
Jan 16 18:00:58.585000 audit: BPF prog-id=107 op=UNLOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.585000 audit: BPF prog-id=108 op=LOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.585000 audit: BPF prog-id=109 op=LOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.585000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.585000 audit: BPF prog-id=109 op=UNLOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.585000 audit: BPF prog-id=108 op=UNLOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:00:58.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.585000 audit: BPF prog-id=110 op=LOAD Jan 16 18:00:58.585000 audit[2622]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2534 pid=2622 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:00:58.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3233616335356464383962353962383830346632646237623438373336 Jan 16 18:00:58.603885 kubelet[2441]: W0116 18:00:58.603678 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.Service: Get "https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:58.604416 kubelet[2441]: E0116 18:00:58.603765 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.Service: failed to list *v1.Service: Get \"https://167.235.246.183:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:58.606996 containerd[1571]: time="2026-01-16T18:00:58.606948751Z" level=info msg="StartContainer for \"d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169\" returns successfully" Jan 16 18:00:58.641651 kubelet[2441]: W0116 18:00:58.641371 2441 reflector.go:569] k8s.io/client-go/informers/factory.go:160: failed to list *v1.CSIDriver: Get "https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0": dial tcp 167.235.246.183:6443: connect: connection refused Jan 16 18:00:58.643528 kubelet[2441]: E0116 18:00:58.643401 2441 reflector.go:166] "Unhandled Error" err="k8s.io/client-go/informers/factory.go:160: Failed to watch *v1.CSIDriver: failed to list *v1.CSIDriver: Get \"https://167.235.246.183:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 167.235.246.183:6443: connect: connection refused" logger="UnhandledError" Jan 16 18:00:58.661523 containerd[1571]: 
time="2026-01-16T18:00:58.660950639Z" level=info msg="StartContainer for \"617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940\" returns successfully" Jan 16 18:00:58.668828 containerd[1571]: time="2026-01-16T18:00:58.668393400Z" level=info msg="StartContainer for \"23ac55dd89b59b8804f2db7b4873636195680bf606dba0881d7443c069f3596d\" returns successfully" Jan 16 18:00:58.801431 kubelet[2441]: E0116 18:00:58.801292 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.810930 kubelet[2441]: E0116 18:00:58.810830 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:58.811644 kubelet[2441]: E0116 18:00:58.811239 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:59.354873 kubelet[2441]: I0116 18:00:59.354530 2441 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:59.812107 kubelet[2441]: E0116 18:00:59.812009 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:00:59.813105 kubelet[2441]: E0116 18:00:59.812663 2441 kubelet.go:3190] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.261976 kubelet[2441]: E0116 18:01:01.261917 2441 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4580-0-0-p-e4bb445d88\" not found" node="ci-4580-0-0-p-e4bb445d88" Jan 16 
18:01:01.382564 kubelet[2441]: I0116 18:01:01.382499 2441 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.417643 kubelet[2441]: E0116 18:01:01.417377 2441 event.go:359] "Server rejected event (will not retry!)" err="namespaces \"default\" not found" event="&Event{ObjectMeta:{ci-4580-0-0-p-e4bb445d88.188b48034f70c5f7 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4580-0-0-p-e4bb445d88,UID:ci-4580-0-0-p-e4bb445d88,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-e4bb445d88,},FirstTimestamp:2026-01-16 18:00:57.727944183 +0000 UTC m=+1.400780184,LastTimestamp:2026-01-16 18:00:57.727944183 +0000 UTC m=+1.400780184,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-e4bb445d88,}" Jan 16 18:01:01.452391 kubelet[2441]: I0116 18:01:01.452053 2441 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.470038 kubelet[2441]: E0116 18:01:01.469999 2441 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.470358 kubelet[2441]: I0116 18:01:01.470210 2441 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.473557 kubelet[2441]: E0116 18:01:01.473281 2441 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 
18:01:01.473557 kubelet[2441]: I0116 18:01:01.473331 2441 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.476314 kubelet[2441]: E0116 18:01:01.476275 2441 kubelet.go:3196] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4580-0-0-p-e4bb445d88\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:01.724378 kubelet[2441]: I0116 18:01:01.724313 2441 apiserver.go:52] "Watching apiserver" Jan 16 18:01:01.755110 kubelet[2441]: I0116 18:01:01.755046 2441 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:01:03.781020 systemd[1]: Reload requested from client PID 2704 ('systemctl') (unit session-8.scope)... Jan 16 18:01:03.781411 systemd[1]: Reloading... Jan 16 18:01:03.911737 zram_generator::config[2766]: No configuration found. Jan 16 18:01:04.113803 systemd[1]: Reloading finished in 331 ms. Jan 16 18:01:04.149466 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:01:04.164872 systemd[1]: kubelet.service: Deactivated successfully. Jan 16 18:01:04.170726 kernel: kauditd_printk_skb: 202 callbacks suppressed Jan 16 18:01:04.170842 kernel: audit: type=1131 audit(1768586464.166:389): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:04.166000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:04.166793 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:01:04.170264 systemd[1]: kubelet.service: Consumed 1.933s CPU time, 128M memory peak. 
Jan 16 18:01:04.174310 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 16 18:01:04.176000 audit: BPF prog-id=111 op=LOAD Jan 16 18:01:04.176000 audit: BPF prog-id=78 op=UNLOAD Jan 16 18:01:04.179862 kernel: audit: type=1334 audit(1768586464.176:390): prog-id=111 op=LOAD Jan 16 18:01:04.179923 kernel: audit: type=1334 audit(1768586464.176:391): prog-id=78 op=UNLOAD Jan 16 18:01:04.179949 kernel: audit: type=1334 audit(1768586464.176:392): prog-id=112 op=LOAD Jan 16 18:01:04.176000 audit: BPF prog-id=112 op=LOAD Jan 16 18:01:04.176000 audit: BPF prog-id=113 op=LOAD Jan 16 18:01:04.180762 kernel: audit: type=1334 audit(1768586464.176:393): prog-id=113 op=LOAD Jan 16 18:01:04.180810 kernel: audit: type=1334 audit(1768586464.176:394): prog-id=79 op=UNLOAD Jan 16 18:01:04.176000 audit: BPF prog-id=79 op=UNLOAD Jan 16 18:01:04.176000 audit: BPF prog-id=80 op=UNLOAD Jan 16 18:01:04.185646 kernel: audit: type=1334 audit(1768586464.176:395): prog-id=80 op=UNLOAD Jan 16 18:01:04.185738 kernel: audit: type=1334 audit(1768586464.177:396): prog-id=114 op=LOAD Jan 16 18:01:04.185760 kernel: audit: type=1334 audit(1768586464.177:397): prog-id=62 op=UNLOAD Jan 16 18:01:04.185790 kernel: audit: type=1334 audit(1768586464.177:398): prog-id=115 op=LOAD Jan 16 18:01:04.177000 audit: BPF prog-id=114 op=LOAD Jan 16 18:01:04.177000 audit: BPF prog-id=62 op=UNLOAD Jan 16 18:01:04.177000 audit: BPF prog-id=115 op=LOAD Jan 16 18:01:04.177000 audit: BPF prog-id=116 op=LOAD Jan 16 18:01:04.177000 audit: BPF prog-id=63 op=UNLOAD Jan 16 18:01:04.177000 audit: BPF prog-id=64 op=UNLOAD Jan 16 18:01:04.178000 audit: BPF prog-id=117 op=LOAD Jan 16 18:01:04.178000 audit: BPF prog-id=61 op=UNLOAD Jan 16 18:01:04.183000 audit: BPF prog-id=118 op=LOAD Jan 16 18:01:04.183000 audit: BPF prog-id=74 op=UNLOAD Jan 16 18:01:04.183000 audit: BPF prog-id=119 op=LOAD Jan 16 18:01:04.183000 audit: BPF prog-id=75 op=UNLOAD Jan 16 18:01:04.184000 audit: BPF prog-id=120 op=LOAD Jan 
16 18:01:04.184000 audit: BPF prog-id=121 op=LOAD Jan 16 18:01:04.184000 audit: BPF prog-id=76 op=UNLOAD Jan 16 18:01:04.184000 audit: BPF prog-id=77 op=UNLOAD Jan 16 18:01:04.184000 audit: BPF prog-id=122 op=LOAD Jan 16 18:01:04.184000 audit: BPF prog-id=67 op=UNLOAD Jan 16 18:01:04.186000 audit: BPF prog-id=123 op=LOAD Jan 16 18:01:04.188000 audit: BPF prog-id=124 op=LOAD Jan 16 18:01:04.188000 audit: BPF prog-id=65 op=UNLOAD Jan 16 18:01:04.188000 audit: BPF prog-id=66 op=UNLOAD Jan 16 18:01:04.189000 audit: BPF prog-id=125 op=LOAD Jan 16 18:01:04.189000 audit: BPF prog-id=68 op=UNLOAD Jan 16 18:01:04.189000 audit: BPF prog-id=126 op=LOAD Jan 16 18:01:04.189000 audit: BPF prog-id=127 op=LOAD Jan 16 18:01:04.189000 audit: BPF prog-id=69 op=UNLOAD Jan 16 18:01:04.189000 audit: BPF prog-id=70 op=UNLOAD Jan 16 18:01:04.191000 audit: BPF prog-id=128 op=LOAD Jan 16 18:01:04.191000 audit: BPF prog-id=71 op=UNLOAD Jan 16 18:01:04.191000 audit: BPF prog-id=129 op=LOAD Jan 16 18:01:04.191000 audit: BPF prog-id=130 op=LOAD Jan 16 18:01:04.191000 audit: BPF prog-id=72 op=UNLOAD Jan 16 18:01:04.191000 audit: BPF prog-id=73 op=UNLOAD Jan 16 18:01:04.347980 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 16 18:01:04.347000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:04.374193 (kubelet)[2796]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 16 18:01:04.428645 kubelet[2796]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. 
Jan 16 18:01:04.428645 kubelet[2796]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 16 18:01:04.428645 kubelet[2796]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 16 18:01:04.428645 kubelet[2796]: I0116 18:01:04.427763 2796 server.go:215] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 16 18:01:04.439664 kubelet[2796]: I0116 18:01:04.439283 2796 server.go:520] "Kubelet version" kubeletVersion="v1.32.4" Jan 16 18:01:04.439664 kubelet[2796]: I0116 18:01:04.439332 2796 server.go:522] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 16 18:01:04.439925 kubelet[2796]: I0116 18:01:04.439890 2796 server.go:954] "Client rotation is on, will bootstrap in background" Jan 16 18:01:04.444590 kubelet[2796]: I0116 18:01:04.443452 2796 certificate_store.go:130] Loading cert/key pair from "/var/lib/kubelet/pki/kubelet-client-current.pem". Jan 16 18:01:04.448064 kubelet[2796]: I0116 18:01:04.447709 2796 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 16 18:01:04.453469 kubelet[2796]: I0116 18:01:04.453425 2796 server.go:1444] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 16 18:01:04.455993 kubelet[2796]: I0116 18:01:04.455883 2796 server.go:772] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 16 18:01:04.456177 kubelet[2796]: I0116 18:01:04.456129 2796 container_manager_linux.go:268] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 16 18:01:04.456402 kubelet[2796]: I0116 18:01:04.456169 2796 container_manager_linux.go:273] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4580-0-0-p-e4bb445d88","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"ExperimentalMemoryManagerPolicy":"None","ExperimentalMemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 16 18:01:04.456402 kubelet[2796]: I0116 18:01:04.456401 2796 topology_manager.go:138] "Creating topology manager 
with none policy" Jan 16 18:01:04.456533 kubelet[2796]: I0116 18:01:04.456411 2796 container_manager_linux.go:304] "Creating device plugin manager" Jan 16 18:01:04.456533 kubelet[2796]: I0116 18:01:04.456459 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:01:04.456704 kubelet[2796]: I0116 18:01:04.456663 2796 kubelet.go:446] "Attempting to sync node with API server" Jan 16 18:01:04.457844 kubelet[2796]: I0116 18:01:04.456685 2796 kubelet.go:341] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 16 18:01:04.457974 kubelet[2796]: I0116 18:01:04.457879 2796 kubelet.go:352] "Adding apiserver pod source" Jan 16 18:01:04.457974 kubelet[2796]: I0116 18:01:04.457893 2796 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 16 18:01:04.460227 kubelet[2796]: I0116 18:01:04.460181 2796 kuberuntime_manager.go:269] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 16 18:01:04.463662 kubelet[2796]: I0116 18:01:04.461172 2796 kubelet.go:890] "Not starting ClusterTrustBundle informer because we are in static kubelet mode" Jan 16 18:01:04.463662 kubelet[2796]: I0116 18:01:04.461824 2796 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 16 18:01:04.463662 kubelet[2796]: I0116 18:01:04.461862 2796 server.go:1287] "Started kubelet" Jan 16 18:01:04.464650 kubelet[2796]: I0116 18:01:04.464611 2796 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 16 18:01:04.470152 kubelet[2796]: I0116 18:01:04.470077 2796 server.go:169] "Starting to listen" address="0.0.0.0" port=10250 Jan 16 18:01:04.471372 kubelet[2796]: I0116 18:01:04.471330 2796 server.go:479] "Adding debug handlers to kubelet server" Jan 16 18:01:04.474071 kubelet[2796]: I0116 18:01:04.473991 2796 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 16 18:01:04.474374 kubelet[2796]: I0116 18:01:04.474346 2796 server.go:243] "Starting to serve the 
podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 16 18:01:04.475049 kubelet[2796]: I0116 18:01:04.475016 2796 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 16 18:01:04.478964 kubelet[2796]: I0116 18:01:04.478904 2796 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 16 18:01:04.479772 kubelet[2796]: E0116 18:01:04.479735 2796 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4580-0-0-p-e4bb445d88\" not found" Jan 16 18:01:04.488945 kubelet[2796]: I0116 18:01:04.488868 2796 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 16 18:01:04.489115 kubelet[2796]: I0116 18:01:04.489032 2796 reconciler.go:26] "Reconciler: start to sync state" Jan 16 18:01:04.492563 kubelet[2796]: I0116 18:01:04.492370 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv4" Jan 16 18:01:04.494582 kubelet[2796]: I0116 18:01:04.494410 2796 kubelet_network_linux.go:50] "Initialized iptables rules." protocol="IPv6" Jan 16 18:01:04.494582 kubelet[2796]: I0116 18:01:04.494466 2796 status_manager.go:227] "Starting to sync pod status with apiserver" Jan 16 18:01:04.494582 kubelet[2796]: I0116 18:01:04.494491 2796 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 16 18:01:04.494582 kubelet[2796]: I0116 18:01:04.494498 2796 kubelet.go:2382] "Starting kubelet main sync loop" Jan 16 18:01:04.494582 kubelet[2796]: E0116 18:01:04.494581 2796 kubelet.go:2406] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 16 18:01:04.501668 kubelet[2796]: I0116 18:01:04.501172 2796 factory.go:219] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 16 18:01:04.507752 kubelet[2796]: E0116 18:01:04.507706 2796 kubelet.go:1555] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 16 18:01:04.507912 kubelet[2796]: I0116 18:01:04.507873 2796 factory.go:221] Registration of the containerd container factory successfully Jan 16 18:01:04.507912 kubelet[2796]: I0116 18:01:04.507900 2796 factory.go:221] Registration of the systemd container factory successfully Jan 16 18:01:04.588763 kubelet[2796]: I0116 18:01:04.588616 2796 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 16 18:01:04.588763 kubelet[2796]: I0116 18:01:04.588747 2796 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 16 18:01:04.588763 kubelet[2796]: I0116 18:01:04.588778 2796 state_mem.go:36] "Initialized new in-memory state store" Jan 16 18:01:04.589902 kubelet[2796]: I0116 18:01:04.589860 2796 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 16 18:01:04.589902 kubelet[2796]: I0116 18:01:04.589893 2796 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 16 18:01:04.590117 kubelet[2796]: I0116 18:01:04.589922 2796 policy_none.go:49] "None policy: Start" Jan 16 18:01:04.590148 kubelet[2796]: I0116 18:01:04.590126 2796 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 16 18:01:04.590148 kubelet[2796]: I0116 18:01:04.590144 
2796 state_mem.go:35] "Initializing new in-memory state store" Jan 16 18:01:04.590415 kubelet[2796]: I0116 18:01:04.590338 2796 state_mem.go:75] "Updated machine memory state" Jan 16 18:01:04.594745 kubelet[2796]: E0116 18:01:04.594700 2796 kubelet.go:2406] "Skipping pod synchronization" err="container runtime status check may not have completed yet" Jan 16 18:01:04.597042 kubelet[2796]: I0116 18:01:04.596994 2796 manager.go:519] "Failed to read data from checkpoint" checkpoint="kubelet_internal_checkpoint" err="checkpoint is not found" Jan 16 18:01:04.597240 kubelet[2796]: I0116 18:01:04.597206 2796 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 16 18:01:04.597303 kubelet[2796]: I0116 18:01:04.597238 2796 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 16 18:01:04.603656 kubelet[2796]: I0116 18:01:04.601983 2796 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 16 18:01:04.606308 kubelet[2796]: E0116 18:01:04.606161 2796 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 16 18:01:04.709008 kubelet[2796]: I0116 18:01:04.708903 2796 kubelet_node_status.go:75] "Attempting to register node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.727202 kubelet[2796]: I0116 18:01:04.727171 2796 kubelet_node_status.go:124] "Node was previously registered" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.730335 kubelet[2796]: I0116 18:01:04.729227 2796 kubelet_node_status.go:78] "Successfully registered node" node="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.795697 kubelet[2796]: I0116 18:01:04.795545 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.796561 kubelet[2796]: I0116 18:01:04.796337 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.799294 kubelet[2796]: I0116 18:01:04.799259 2796 kubelet.go:3194] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.890670 kubelet[2796]: I0116 18:01:04.890274 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-k8s-certs\") pod \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.890670 kubelet[2796]: I0116 18:01:04.890369 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.890670 kubelet[2796]: I0116 18:01:04.890421 2796 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.890670 kubelet[2796]: I0116 18:01:04.890455 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-kubeconfig\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.890670 kubelet[2796]: I0116 18:01:04.890540 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/8c866d1895cbdafa1e19ab254972b915-kubeconfig\") pod \"kube-scheduler-ci-4580-0-0-p-e4bb445d88\" (UID: \"8c866d1895cbdafa1e19ab254972b915\") " pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.891086 kubelet[2796]: I0116 18:01:04.890582 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/680c78a69ceec4d014b9ac2c8c9f439d-ca-certs\") pod \"kube-apiserver-ci-4580-0-0-p-e4bb445d88\" (UID: \"680c78a69ceec4d014b9ac2c8c9f439d\") " pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.891618 kubelet[2796]: I0116 18:01:04.891419 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-ca-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: 
\"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.891618 kubelet[2796]: I0116 18:01:04.891501 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-flexvolume-dir\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:04.891618 kubelet[2796]: I0116 18:01:04.891537 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/ef388b2e3a4a05631c55078f457a2431-k8s-certs\") pod \"kube-controller-manager-ci-4580-0-0-p-e4bb445d88\" (UID: \"ef388b2e3a4a05631c55078f457a2431\") " pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:05.458970 kubelet[2796]: I0116 18:01:05.458668 2796 apiserver.go:52] "Watching apiserver" Jan 16 18:01:05.489975 kubelet[2796]: I0116 18:01:05.489843 2796 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 16 18:01:05.640685 kubelet[2796]: I0116 18:01:05.640576 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4580-0-0-p-e4bb445d88" podStartSLOduration=1.640553701 podStartE2EDuration="1.640553701s" podCreationTimestamp="2026-01-16 18:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:05.616848281 +0000 UTC m=+1.234354345" watchObservedRunningTime="2026-01-16 18:01:05.640553701 +0000 UTC m=+1.258059765" Jan 16 18:01:05.685460 kubelet[2796]: I0116 18:01:05.685245 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/kube-apiserver-ci-4580-0-0-p-e4bb445d88" podStartSLOduration=1.6852228 podStartE2EDuration="1.6852228s" podCreationTimestamp="2026-01-16 18:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:05.641223418 +0000 UTC m=+1.258729522" watchObservedRunningTime="2026-01-16 18:01:05.6852228 +0000 UTC m=+1.302728864" Jan 16 18:01:05.716385 kubelet[2796]: I0116 18:01:05.715966 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-controller-manager-ci-4580-0-0-p-e4bb445d88" podStartSLOduration=1.715944204 podStartE2EDuration="1.715944204s" podCreationTimestamp="2026-01-16 18:01:04 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:05.686840735 +0000 UTC m=+1.304346799" watchObservedRunningTime="2026-01-16 18:01:05.715944204 +0000 UTC m=+1.333450268" Jan 16 18:01:10.127128 kubelet[2796]: I0116 18:01:10.127085 2796 kuberuntime_manager.go:1702] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 16 18:01:10.128513 containerd[1571]: time="2026-01-16T18:01:10.128451520Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 16 18:01:10.129133 kubelet[2796]: I0116 18:01:10.128737 2796 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 16 18:01:11.146336 systemd[1]: Created slice kubepods-besteffort-pod5204cb00_19c8_406b_9496_f6ada07b7565.slice - libcontainer container kubepods-besteffort-pod5204cb00_19c8_406b_9496_f6ada07b7565.slice. 
Jan 16 18:01:11.238184 kubelet[2796]: I0116 18:01:11.237951 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gbxt9\" (UniqueName: \"kubernetes.io/projected/5204cb00-19c8-406b-9496-f6ada07b7565-kube-api-access-gbxt9\") pod \"kube-proxy-9z4t8\" (UID: \"5204cb00-19c8-406b-9496-f6ada07b7565\") " pod="kube-system/kube-proxy-9z4t8" Jan 16 18:01:11.238184 kubelet[2796]: I0116 18:01:11.238005 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/5204cb00-19c8-406b-9496-f6ada07b7565-lib-modules\") pod \"kube-proxy-9z4t8\" (UID: \"5204cb00-19c8-406b-9496-f6ada07b7565\") " pod="kube-system/kube-proxy-9z4t8" Jan 16 18:01:11.238184 kubelet[2796]: I0116 18:01:11.238028 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/5204cb00-19c8-406b-9496-f6ada07b7565-kube-proxy\") pod \"kube-proxy-9z4t8\" (UID: \"5204cb00-19c8-406b-9496-f6ada07b7565\") " pod="kube-system/kube-proxy-9z4t8" Jan 16 18:01:11.238184 kubelet[2796]: I0116 18:01:11.238047 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/5204cb00-19c8-406b-9496-f6ada07b7565-xtables-lock\") pod \"kube-proxy-9z4t8\" (UID: \"5204cb00-19c8-406b-9496-f6ada07b7565\") " pod="kube-system/kube-proxy-9z4t8" Jan 16 18:01:11.273763 systemd[1]: Created slice kubepods-besteffort-podd0f5f4b5_351f_43b8_948d_bf01a56f47cf.slice - libcontainer container kubepods-besteffort-podd0f5f4b5_351f_43b8_948d_bf01a56f47cf.slice. 
Jan 16 18:01:11.340470 kubelet[2796]: I0116 18:01:11.339870 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/d0f5f4b5-351f-43b8-948d-bf01a56f47cf-var-lib-calico\") pod \"tigera-operator-7dcd859c48-6srkf\" (UID: \"d0f5f4b5-351f-43b8-948d-bf01a56f47cf\") " pod="tigera-operator/tigera-operator-7dcd859c48-6srkf" Jan 16 18:01:11.340470 kubelet[2796]: I0116 18:01:11.339951 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zfswm\" (UniqueName: \"kubernetes.io/projected/d0f5f4b5-351f-43b8-948d-bf01a56f47cf-kube-api-access-zfswm\") pod \"tigera-operator-7dcd859c48-6srkf\" (UID: \"d0f5f4b5-351f-43b8-948d-bf01a56f47cf\") " pod="tigera-operator/tigera-operator-7dcd859c48-6srkf" Jan 16 18:01:11.460689 containerd[1571]: time="2026-01-16T18:01:11.460520412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z4t8,Uid:5204cb00-19c8-406b-9496-f6ada07b7565,Namespace:kube-system,Attempt:0,}" Jan 16 18:01:11.486502 containerd[1571]: time="2026-01-16T18:01:11.486356553Z" level=info msg="connecting to shim 38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8" address="unix:///run/containerd/s/cc20636475f51ac02145f9d4ff86feb4921b99054ea89fe9d7779ba1df1b8c7d" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:11.515942 systemd[1]: Started cri-containerd-38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8.scope - libcontainer container 38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8. 
Jan 16 18:01:11.531000 audit: BPF prog-id=131 op=LOAD Jan 16 18:01:11.533864 kernel: kauditd_printk_skb: 32 callbacks suppressed Jan 16 18:01:11.533918 kernel: audit: type=1334 audit(1768586471.531:431): prog-id=131 op=LOAD Jan 16 18:01:11.533000 audit: BPF prog-id=132 op=LOAD Jan 16 18:01:11.536115 kernel: audit: type=1334 audit(1768586471.533:432): prog-id=132 op=LOAD Jan 16 18:01:11.536212 kernel: audit: type=1300 audit(1768586471.533:432): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.533000 audit[2863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.533000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.540997 kernel: audit: type=1327 audit(1768586471.533:432): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.541107 kernel: audit: type=1334 audit(1768586471.534:433): prog-id=132 op=UNLOAD Jan 16 18:01:11.534000 audit: BPF prog-id=132 op=UNLOAD Jan 16 18:01:11.534000 audit[2863]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2863 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.543596 kernel: audit: type=1300 audit(1768586471.534:433): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.546893 kernel: audit: type=1327 audit(1768586471.534:433): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.546925 kernel: audit: type=1334 audit(1768586471.534:434): prog-id=133 op=LOAD Jan 16 18:01:11.546945 kernel: audit: type=1300 audit(1768586471.534:434): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.534000 audit: BPF prog-id=133 op=LOAD Jan 16 18:01:11.534000 audit[2863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:01:11.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.551825 kernel: audit: type=1327 audit(1768586471.534:434): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.534000 audit: BPF prog-id=134 op=LOAD Jan 16 18:01:11.534000 audit[2863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.534000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.537000 audit: BPF prog-id=134 op=UNLOAD Jan 16 18:01:11.537000 audit[2863]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.537000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.537000 audit: BPF prog-id=133 op=UNLOAD Jan 16 18:01:11.537000 audit[2863]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.537000 audit: BPF prog-id=135 op=LOAD Jan 16 18:01:11.537000 audit[2863]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=2853 pid=2863 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.537000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3338636330613434346230656464346135393937316266353936366234 Jan 16 18:01:11.572741 containerd[1571]: time="2026-01-16T18:01:11.572381439Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-9z4t8,Uid:5204cb00-19c8-406b-9496-f6ada07b7565,Namespace:kube-system,Attempt:0,} returns sandbox id \"38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8\"" Jan 16 18:01:11.578326 containerd[1571]: 
time="2026-01-16T18:01:11.578264979Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6srkf,Uid:d0f5f4b5-351f-43b8-948d-bf01a56f47cf,Namespace:tigera-operator,Attempt:0,}" Jan 16 18:01:11.578726 containerd[1571]: time="2026-01-16T18:01:11.578298458Z" level=info msg="CreateContainer within sandbox \"38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 16 18:01:11.595553 containerd[1571]: time="2026-01-16T18:01:11.595423663Z" level=info msg="Container dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:11.608504 containerd[1571]: time="2026-01-16T18:01:11.608449608Z" level=info msg="connecting to shim 45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898" address="unix:///run/containerd/s/ce924340609ad11cb08b8f373a7bb33affed250017f8dcbc0ea33d1abc680e32" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:11.611613 containerd[1571]: time="2026-01-16T18:01:11.611562231Z" level=info msg="CreateContainer within sandbox \"38cc0a444b0edd4a59971bf5966b41d9944949555c0b15b37dede50abaadcab8\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5\"" Jan 16 18:01:11.613790 containerd[1571]: time="2026-01-16T18:01:11.613524544Z" level=info msg="StartContainer for \"dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5\"" Jan 16 18:01:11.618044 containerd[1571]: time="2026-01-16T18:01:11.617731999Z" level=info msg="connecting to shim dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5" address="unix:///run/containerd/s/cc20636475f51ac02145f9d4ff86feb4921b99054ea89fe9d7779ba1df1b8c7d" protocol=ttrpc version=3 Jan 16 18:01:11.645987 systemd[1]: Started cri-containerd-dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5.scope - libcontainer container 
dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5. Jan 16 18:01:11.657967 systemd[1]: Started cri-containerd-45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898.scope - libcontainer container 45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898. Jan 16 18:01:11.671000 audit: BPF prog-id=136 op=LOAD Jan 16 18:01:11.674000 audit: BPF prog-id=137 op=LOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c180 a2=98 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=137 op=UNLOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=138 op=LOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c3e8 a2=98 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=139 op=LOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=400010c168 a2=98 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=139 op=UNLOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=138 op=UNLOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.674000 audit: BPF prog-id=140 op=LOAD Jan 16 18:01:11.674000 audit[2913]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=400010c648 a2=98 a3=0 items=0 ppid=2900 pid=2913 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.674000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3435663737393733633661353737613564306235386635653161623936 Jan 16 18:01:11.697000 audit: BPF prog-id=141 op=LOAD Jan 16 18:01:11.697000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2853 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323836343733323438373661316137303762646566336363623338 Jan 16 18:01:11.697000 audit: BPF prog-id=142 op=LOAD Jan 16 18:01:11.697000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2853 pid=2911 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.697000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323836343733323438373661316137303762646566336363623338 Jan 16 18:01:11.698000 audit: BPF prog-id=142 op=UNLOAD Jan 16 18:01:11.698000 audit[2911]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323836343733323438373661316137303762646566336363623338 Jan 16 18:01:11.698000 audit: BPF prog-id=141 op=UNLOAD Jan 16 18:01:11.698000 audit[2911]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2853 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323836343733323438373661316137303762646566336363623338 Jan 16 18:01:11.698000 audit: BPF prog-id=143 op=LOAD Jan 16 18:01:11.698000 audit[2911]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 
a2=98 a3=0 items=0 ppid=2853 pid=2911 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.698000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6463323836343733323438373661316137303762646566336363623338 Jan 16 18:01:11.719324 containerd[1571]: time="2026-01-16T18:01:11.719202444Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-6srkf,Uid:d0f5f4b5-351f-43b8-948d-bf01a56f47cf,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898\"" Jan 16 18:01:11.727402 containerd[1571]: time="2026-01-16T18:01:11.727357804Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 16 18:01:11.742191 containerd[1571]: time="2026-01-16T18:01:11.742139313Z" level=info msg="StartContainer for \"dc28647324876a1a707bdef3ccb38d76dd12dfd5ce59c18437ba3cc3bb729ad5\" returns successfully" Jan 16 18:01:11.959000 audit[2999]: NETFILTER_CFG table=mangle:54 family=2 entries=1 op=nft_register_chain pid=2999 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:11.959000 audit[2999]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe3631af0 a2=0 a3=1 items=0 ppid=2936 pid=2999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.959000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:01:11.962000 audit[3000]: NETFILTER_CFG table=mangle:55 family=10 entries=1 op=nft_register_chain pid=3000 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:11.962000 audit[3000]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc98b9bd0 a2=0 a3=1 items=0 ppid=2936 pid=3000 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.962000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 16 18:01:11.967000 audit[3001]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3001 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:11.967000 audit[3001]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff8c53d80 a2=0 a3=1 items=0 ppid=2936 pid=3001 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.967000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:01:11.970000 audit[3002]: NETFILTER_CFG table=nat:57 family=10 entries=1 op=nft_register_chain pid=3002 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:11.970000 audit[3002]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa03ce30 a2=0 a3=1 items=0 ppid=2936 pid=3002 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.970000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 16 18:01:11.970000 audit[3003]: NETFILTER_CFG table=filter:58 family=2 entries=1 op=nft_register_chain pid=3003 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:11.970000 audit[3003]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffcc3c2950 a2=0 a3=1 items=0 ppid=2936 pid=3003 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.970000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:01:11.974000 audit[3004]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3004 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:11.974000 audit[3004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc8364ce0 a2=0 a3=1 items=0 ppid=2936 pid=3004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:11.974000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 16 18:01:12.064000 audit[3005]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3005 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.064000 audit[3005]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc7577350 a2=0 a3=1 items=0 ppid=2936 pid=3005 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.064000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:01:12.070000 audit[3007]: NETFILTER_CFG table=filter:61 family=2 entries=1 
op=nft_register_rule pid=3007 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.070000 audit[3007]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffeded9030 a2=0 a3=1 items=0 ppid=2936 pid=3007 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.070000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 16 18:01:12.076000 audit[3010]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3010 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.076000 audit[3010]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc18ed010 a2=0 a3=1 items=0 ppid=2936 pid=3010 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.076000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 16 18:01:12.078000 audit[3011]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3011 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.078000 audit[3011]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff0cfd6d0 a2=0 a3=1 items=0 ppid=2936 pid=3011 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" 
exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.078000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:01:12.085000 audit[3013]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3013 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.085000 audit[3013]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffcfcafc10 a2=0 a3=1 items=0 ppid=2936 pid=3013 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.085000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:01:12.088000 audit[3014]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3014 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.088000 audit[3014]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffeb3b5500 a2=0 a3=1 items=0 ppid=2936 pid=3014 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.088000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:01:12.091000 audit[3016]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3016 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.091000 audit[3016]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffd61acee0 a2=0 a3=1 items=0 ppid=2936 
pid=3016 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.091000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:01:12.098000 audit[3019]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3019 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.098000 audit[3019]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffed193cf0 a2=0 a3=1 items=0 ppid=2936 pid=3019 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.098000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 16 18:01:12.100000 audit[3020]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3020 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.100000 audit[3020]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff2361d20 a2=0 a3=1 items=0 ppid=2936 pid=3020 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.100000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 
18:01:12.104000 audit[3022]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3022 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.104000 audit[3022]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffbf278a0 a2=0 a3=1 items=0 ppid=2936 pid=3022 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.104000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:01:12.106000 audit[3023]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3023 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.106000 audit[3023]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff1c71da0 a2=0 a3=1 items=0 ppid=2936 pid=3023 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.106000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:01:12.109000 audit[3025]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3025 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.109000 audit[3025]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffd8437420 a2=0 a3=1 items=0 ppid=2936 pid=3025 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.109000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:01:12.114000 audit[3028]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3028 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.114000 audit[3028]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdf4d28b0 a2=0 a3=1 items=0 ppid=2936 pid=3028 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.114000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:01:12.118000 audit[3031]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3031 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.118000 audit[3031]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffe6bd2a40 a2=0 a3=1 items=0 ppid=2936 pid=3031 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.118000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:01:12.120000 audit[3032]: NETFILTER_CFG table=nat:74 family=2 entries=1 
op=nft_register_chain pid=3032 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.120000 audit[3032]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffff99955c0 a2=0 a3=1 items=0 ppid=2936 pid=3032 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.120000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:01:12.123000 audit[3034]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3034 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.123000 audit[3034]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffc3cd9ba0 a2=0 a3=1 items=0 ppid=2936 pid=3034 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.123000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:01:12.127000 audit[3037]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3037 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.127000 audit[3037]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffd245010 a2=0 a3=1 items=0 ppid=2936 pid=3037 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.127000 audit: PROCTITLE 
proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:01:12.129000 audit[3038]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.129000 audit[3038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd964a060 a2=0 a3=1 items=0 ppid=2936 pid=3038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.129000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:01:12.132000 audit[3040]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 16 18:01:12.132000 audit[3040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=fffff995b480 a2=0 a3=1 items=0 ppid=2936 pid=3040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.132000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:01:12.164000 audit[3046]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:12.164000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffff361500 a2=0 a3=1 items=0 ppid=2936 pid=3046 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.164000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:12.172000 audit[3046]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3046 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:12.172000 audit[3046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffff361500 a2=0 a3=1 items=0 ppid=2936 pid=3046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.172000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:12.175000 audit[3051]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3051 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.175000 audit[3051]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffc17948b0 a2=0 a3=1 items=0 ppid=2936 pid=3051 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.175000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 16 18:01:12.179000 audit[3053]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3053 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.179000 audit[3053]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffd1f9a8b0 
a2=0 a3=1 items=0 ppid=2936 pid=3053 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.179000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 16 18:01:12.186000 audit[3056]: NETFILTER_CFG table=filter:83 family=10 entries=1 op=nft_register_rule pid=3056 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.186000 audit[3056]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff9d81f50 a2=0 a3=1 items=0 ppid=2936 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.186000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 16 18:01:12.188000 audit[3057]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3057 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.188000 audit[3057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdbbb8600 a2=0 a3=1 items=0 ppid=2936 pid=3057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.188000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 16 18:01:12.193000 audit[3059]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3059 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.193000 audit[3059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff9684ec0 a2=0 a3=1 items=0 ppid=2936 pid=3059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.193000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 16 18:01:12.194000 audit[3060]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3060 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.194000 audit[3060]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffce80e7b0 a2=0 a3=1 items=0 ppid=2936 pid=3060 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.194000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 16 18:01:12.201000 audit[3062]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3062 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.201000 audit[3062]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=fffff1980be0 a2=0 a3=1 items=0 ppid=2936 pid=3062 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.201000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 16 18:01:12.206000 audit[3065]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3065 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.206000 audit[3065]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffc5448360 a2=0 a3=1 items=0 ppid=2936 pid=3065 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.206000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 16 18:01:12.207000 audit[3066]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3066 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.207000 audit[3066]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffa8109c0 a2=0 a3=1 items=0 ppid=2936 pid=3066 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.207000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 16 18:01:12.212000 audit[3068]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule 
pid=3068 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.212000 audit[3068]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff6d7bac0 a2=0 a3=1 items=0 ppid=2936 pid=3068 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.212000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 16 18:01:12.213000 audit[3069]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3069 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.213000 audit[3069]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe99cda10 a2=0 a3=1 items=0 ppid=2936 pid=3069 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.213000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 16 18:01:12.216000 audit[3071]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3071 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.216000 audit[3071]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff560b090 a2=0 a3=1 items=0 ppid=2936 pid=3071 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.216000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 16 18:01:12.223000 audit[3074]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3074 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.223000 audit[3074]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff1a95bd0 a2=0 a3=1 items=0 ppid=2936 pid=3074 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.223000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 16 18:01:12.234000 audit[3077]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3077 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.234000 audit[3077]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffdcd11ed0 a2=0 a3=1 items=0 ppid=2936 pid=3077 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.234000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 16 18:01:12.236000 audit[3078]: NETFILTER_CFG table=nat:95 family=10 entries=1 
op=nft_register_chain pid=3078 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.236000 audit[3078]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd64f1140 a2=0 a3=1 items=0 ppid=2936 pid=3078 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.236000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 16 18:01:12.243000 audit[3080]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3080 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.243000 audit[3080]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffe9639120 a2=0 a3=1 items=0 ppid=2936 pid=3080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.243000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:01:12.250000 audit[3083]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3083 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.250000 audit[3083]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffe136dea0 a2=0 a3=1 items=0 ppid=2936 pid=3083 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.250000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 16 18:01:12.252000 audit[3084]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3084 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.252000 audit[3084]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd2d15a00 a2=0 a3=1 items=0 ppid=2936 pid=3084 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.252000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 16 18:01:12.256000 audit[3086]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3086 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.256000 audit[3086]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=ffffdc87b000 a2=0 a3=1 items=0 ppid=2936 pid=3086 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.256000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 16 18:01:12.259000 audit[3087]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3087 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.259000 audit[3087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd1fc4810 a2=0 a3=1 items=0 ppid=2936 
pid=3087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.259000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 16 18:01:12.263000 audit[3089]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3089 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.263000 audit[3089]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffed1e1670 a2=0 a3=1 items=0 ppid=2936 pid=3089 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.263000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:01:12.274000 audit[3092]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3092 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 16 18:01:12.274000 audit[3092]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=fffffb8726b0 a2=0 a3=1 items=0 ppid=2936 pid=3092 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.274000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 16 18:01:12.279000 audit[3094]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:01:12.279000 audit[3094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 
a1=ffffe8555520 a2=0 a3=1 items=0 ppid=2936 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.279000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:12.279000 audit[3094]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3094 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 16 18:01:12.279000 audit[3094]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=ffffe8555520 a2=0 a3=1 items=0 ppid=2936 pid=3094 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:12.279000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:13.991025 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2682762565.mount: Deactivated successfully. 
Jan 16 18:01:15.108696 containerd[1571]: time="2026-01-16T18:01:15.108319834Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:15.111239 containerd[1571]: time="2026-01-16T18:01:15.110914625Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 16 18:01:15.112878 containerd[1571]: time="2026-01-16T18:01:15.112799721Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:15.119401 containerd[1571]: time="2026-01-16T18:01:15.119246941Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:15.120229 containerd[1571]: time="2026-01-16T18:01:15.120187669Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 3.392782507s" Jan 16 18:01:15.120380 containerd[1571]: time="2026-01-16T18:01:15.120365503Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 16 18:01:15.125353 containerd[1571]: time="2026-01-16T18:01:15.125314935Z" level=info msg="CreateContainer within sandbox \"45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 16 18:01:15.144648 containerd[1571]: time="2026-01-16T18:01:15.144329247Z" level=info msg="Container 
20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:15.157049 containerd[1571]: time="2026-01-16T18:01:15.156956977Z" level=info msg="CreateContainer within sandbox \"45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135\"" Jan 16 18:01:15.159739 containerd[1571]: time="2026-01-16T18:01:15.158204294Z" level=info msg="StartContainer for \"20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135\"" Jan 16 18:01:15.160749 containerd[1571]: time="2026-01-16T18:01:15.160702289Z" level=info msg="connecting to shim 20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135" address="unix:///run/containerd/s/ce924340609ad11cb08b8f373a7bb33affed250017f8dcbc0ea33d1abc680e32" protocol=ttrpc version=3 Jan 16 18:01:15.189929 systemd[1]: Started cri-containerd-20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135.scope - libcontainer container 20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135. 
Jan 16 18:01:15.204000 audit: BPF prog-id=144 op=LOAD Jan 16 18:01:15.205000 audit: BPF prog-id=145 op=LOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=145 op=UNLOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=146 op=LOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=147 op=LOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=147 op=UNLOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=146 op=UNLOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.205000 audit: BPF prog-id=148 op=LOAD Jan 16 18:01:15.205000 audit[3104]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2900 pid=3104 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:15.205000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3230663031623231303032383338316139373961613834363337376662 Jan 16 18:01:15.230255 containerd[1571]: time="2026-01-16T18:01:15.230132364Z" level=info msg="StartContainer for \"20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135\" returns successfully" Jan 16 18:01:15.618593 kubelet[2796]: I0116 18:01:15.618418 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-9z4t8" podStartSLOduration=4.618396618 podStartE2EDuration="4.618396618s" podCreationTimestamp="2026-01-16 18:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:12.608650652 +0000 UTC m=+8.226156876" watchObservedRunningTime="2026-01-16 18:01:15.618396618 +0000 UTC m=+11.235902682" Jan 16 18:01:16.162261 kubelet[2796]: I0116 18:01:16.161924 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-6srkf" podStartSLOduration=1.7651296969999999 
podStartE2EDuration="5.161899565s" podCreationTimestamp="2026-01-16 18:01:11 +0000 UTC" firstStartedPulling="2026-01-16 18:01:11.725121183 +0000 UTC m=+7.342627247" lastFinishedPulling="2026-01-16 18:01:15.121891051 +0000 UTC m=+10.739397115" observedRunningTime="2026-01-16 18:01:15.620710219 +0000 UTC m=+11.238216363" watchObservedRunningTime="2026-01-16 18:01:16.161899565 +0000 UTC m=+11.779405669" Jan 16 18:01:21.751489 sudo[1867]: pam_unix(sudo:session): session closed for user root Jan 16 18:01:21.750000 audit[1867]: USER_END pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:01:21.754346 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 16 18:01:21.754445 kernel: audit: type=1106 audit(1768586481.750:511): pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:01:21.751000 audit[1867]: CRED_DISP pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 16 18:01:21.757078 kernel: audit: type=1104 audit(1768586481.751:512): pid=1867 uid=500 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 16 18:01:21.853343 sshd[1866]: Connection closed by 68.220.241.50 port 54040 Jan 16 18:01:21.854019 sshd-session[1862]: pam_unix(sshd:session): session closed for user core Jan 16 18:01:21.854000 audit[1862]: USER_END pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:01:21.854000 audit[1862]: CRED_DISP pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:01:21.864940 kernel: audit: type=1106 audit(1768586481.854:513): pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:01:21.865038 kernel: audit: type=1104 audit(1768586481.854:514): pid=1862 uid=0 auid=500 ses=8 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:01:21.864911 systemd-logind[1548]: Session 8 logged out. Waiting for processes to exit. Jan 16 18:01:21.866698 kernel: audit: type=1131 audit(1768586481.864:515): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-167.235.246.183:22-68.220.241.50:54040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:01:21.864000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@6-167.235.246.183:22-68.220.241.50:54040 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:01:21.865598 systemd[1]: sshd@6-167.235.246.183:22-68.220.241.50:54040.service: Deactivated successfully. Jan 16 18:01:21.873571 systemd[1]: session-8.scope: Deactivated successfully. Jan 16 18:01:21.874170 systemd[1]: session-8.scope: Consumed 7.172s CPU time, 218M memory peak. Jan 16 18:01:21.877863 systemd-logind[1548]: Removed session 8. Jan 16 18:01:23.649000 audit[3182]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.649000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd73f9e80 a2=0 a3=1 items=0 ppid=2936 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.654811 kernel: audit: type=1325 audit(1768586483.649:516): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.654967 kernel: audit: type=1300 audit(1768586483.649:516): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd73f9e80 a2=0 a3=1 items=0 ppid=2936 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.649000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:23.657573 kernel: audit: type=1327 audit(1768586483.649:516): 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:23.659000 audit[3182]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.659000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd73f9e80 a2=0 a3=1 items=0 ppid=2936 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.664035 kernel: audit: type=1325 audit(1768586483.659:517): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.664138 kernel: audit: type=1300 audit(1768586483.659:517): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd73f9e80 a2=0 a3=1 items=0 ppid=2936 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.659000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:23.679000 audit[3184]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.679000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffe2efdb70 a2=0 a3=1 items=0 ppid=2936 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.679000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:23.686000 audit[3184]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3184 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:23.686000 audit[3184]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe2efdb70 a2=0 a3=1 items=0 ppid=2936 pid=3184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:23.686000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.495000 audit[3186]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.496670 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 16 18:01:27.496726 kernel: audit: type=1325 audit(1768586487.495:520): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.495000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc6100e60 a2=0 a3=1 items=0 ppid=2936 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.495000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.505065 kernel: audit: type=1300 audit(1768586487.495:520): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffc6100e60 a2=0 a3=1 items=0 ppid=2936 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.505150 kernel: audit: type=1327 audit(1768586487.495:520): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.506000 audit[3186]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.506000 audit[3186]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6100e60 a2=0 a3=1 items=0 ppid=2936 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.511602 kernel: audit: type=1325 audit(1768586487.506:521): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3186 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.511686 kernel: audit: type=1300 audit(1768586487.506:521): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc6100e60 a2=0 a3=1 items=0 ppid=2936 pid=3186 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.516638 kernel: audit: type=1327 audit(1768586487.506:521): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.529000 audit[3188]: NETFILTER_CFG table=filter:111 family=2 entries=18 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.529000 audit[3188]: SYSCALL 
arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd7b420d0 a2=0 a3=1 items=0 ppid=2936 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.535037 kernel: audit: type=1325 audit(1768586487.529:522): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.535210 kernel: audit: type=1300 audit(1768586487.529:522): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffd7b420d0 a2=0 a3=1 items=0 ppid=2936 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.529000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.537250 kernel: audit: type=1327 audit(1768586487.529:522): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:27.537000 audit[3188]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.540646 kernel: audit: type=1325 audit(1768586487.537:523): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:27.537000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd7b420d0 a2=0 a3=1 items=0 ppid=2936 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:27.537000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:28.588000 audit[3191]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:28.588000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffea61dc70 a2=0 a3=1 items=0 ppid=2936 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:28.588000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:28.601000 audit[3191]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:28.601000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffea61dc70 a2=0 a3=1 items=0 ppid=2936 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:28.601000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:31.529191 systemd[1]: Created slice kubepods-besteffort-pod8e2291e9_e15b_400e_b0bd_2becb04ee371.slice - libcontainer container kubepods-besteffort-pod8e2291e9_e15b_400e_b0bd_2becb04ee371.slice. 
Jan 16 18:01:31.556000 audit[3193]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:31.556000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffe606ecc0 a2=0 a3=1 items=0 ppid=2936 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.556000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:31.561000 audit[3193]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3193 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:31.561000 audit[3193]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffe606ecc0 a2=0 a3=1 items=0 ppid=2936 pid=3193 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.561000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:31.578468 kubelet[2796]: I0116 18:01:31.578414 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/8e2291e9-e15b-400e-b0bd-2becb04ee371-typha-certs\") pod \"calico-typha-7cf9f575d-5z4qk\" (UID: \"8e2291e9-e15b-400e-b0bd-2becb04ee371\") " pod="calico-system/calico-typha-7cf9f575d-5z4qk" Jan 16 18:01:31.579196 kubelet[2796]: I0116 18:01:31.579089 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n4nfn\" (UniqueName: 
\"kubernetes.io/projected/8e2291e9-e15b-400e-b0bd-2becb04ee371-kube-api-access-n4nfn\") pod \"calico-typha-7cf9f575d-5z4qk\" (UID: \"8e2291e9-e15b-400e-b0bd-2becb04ee371\") " pod="calico-system/calico-typha-7cf9f575d-5z4qk" Jan 16 18:01:31.579196 kubelet[2796]: I0116 18:01:31.579135 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8e2291e9-e15b-400e-b0bd-2becb04ee371-tigera-ca-bundle\") pod \"calico-typha-7cf9f575d-5z4qk\" (UID: \"8e2291e9-e15b-400e-b0bd-2becb04ee371\") " pod="calico-system/calico-typha-7cf9f575d-5z4qk" Jan 16 18:01:31.602000 audit[3195]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:31.602000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffc09b48d0 a2=0 a3=1 items=0 ppid=2936 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.602000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:31.610000 audit[3195]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3195 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:31.610000 audit[3195]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc09b48d0 a2=0 a3=1 items=0 ppid=2936 pid=3195 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.610000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 
16 18:01:31.798166 systemd[1]: Created slice kubepods-besteffort-podf4b10039_dc30_42c9_8b71_dc0e94181caa.slice - libcontainer container kubepods-besteffort-podf4b10039_dc30_42c9_8b71_dc0e94181caa.slice. Jan 16 18:01:31.836133 containerd[1571]: time="2026-01-16T18:01:31.836093108Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf9f575d-5z4qk,Uid:8e2291e9-e15b-400e-b0bd-2becb04ee371,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:31.878428 containerd[1571]: time="2026-01-16T18:01:31.878339835Z" level=info msg="connecting to shim cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215" address="unix:///run/containerd/s/331c951073c8e4f44785c97a2c49abbca2d8c30f7a42d5d11dc829fecc92ea81" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:31.881159 kubelet[2796]: I0116 18:01:31.880919 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f4b10039-dc30-42c9-8b71-dc0e94181caa-tigera-ca-bundle\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.881159 kubelet[2796]: I0116 18:01:31.881121 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-var-lib-calico\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.881687 kubelet[2796]: I0116 18:01:31.881373 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-xtables-lock\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.881687 kubelet[2796]: I0116 18:01:31.881452 2796 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-cni-bin-dir\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.882506 kubelet[2796]: I0116 18:01:31.881805 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-cni-net-dir\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.882506 kubelet[2796]: I0116 18:01:31.882204 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/f4b10039-dc30-42c9-8b71-dc0e94181caa-node-certs\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.882783 kubelet[2796]: I0116 18:01:31.882735 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-cni-log-dir\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.883288 kubelet[2796]: I0116 18:01:31.882768 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-var-run-calico\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.883288 kubelet[2796]: I0116 18:01:31.883134 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-flexvol-driver-host\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.883288 kubelet[2796]: I0116 18:01:31.883162 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-lib-modules\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.883288 kubelet[2796]: I0116 18:01:31.883182 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dbz95\" (UniqueName: \"kubernetes.io/projected/f4b10039-dc30-42c9-8b71-dc0e94181caa-kube-api-access-dbz95\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.883288 kubelet[2796]: I0116 18:01:31.883203 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/f4b10039-dc30-42c9-8b71-dc0e94181caa-policysync\") pod \"calico-node-v7kmr\" (UID: \"f4b10039-dc30-42c9-8b71-dc0e94181caa\") " pod="calico-system/calico-node-v7kmr" Jan 16 18:01:31.933637 kubelet[2796]: E0116 18:01:31.932810 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:31.933389 systemd[1]: Started cri-containerd-cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215.scope - libcontainer container 
cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215. Jan 16 18:01:31.968000 audit: BPF prog-id=149 op=LOAD Jan 16 18:01:31.971000 audit: BPF prog-id=150 op=LOAD Jan 16 18:01:31.971000 audit[3217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.971000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=150 op=UNLOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=151 op=LOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.972000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=152 op=LOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=152 op=UNLOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=151 op=UNLOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:01:31.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.972000 audit: BPF prog-id=153 op=LOAD Jan 16 18:01:31.972000 audit[3217]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=3206 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:31.972000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6364653835363564323331613332666634323733326535656139343030 Jan 16 18:01:31.984818 kubelet[2796]: I0116 18:01:31.984231 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4dchv\" (UniqueName: \"kubernetes.io/projected/30371a55-b2a2-4dfe-86cf-86b9aadb477e-kube-api-access-4dchv\") pod \"csi-node-driver-lnrx8\" (UID: \"30371a55-b2a2-4dfe-86cf-86b9aadb477e\") " pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:31.984818 kubelet[2796]: I0116 18:01:31.984348 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/30371a55-b2a2-4dfe-86cf-86b9aadb477e-registration-dir\") pod \"csi-node-driver-lnrx8\" (UID: \"30371a55-b2a2-4dfe-86cf-86b9aadb477e\") " pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:31.984818 kubelet[2796]: I0116 18:01:31.984369 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/host-path/30371a55-b2a2-4dfe-86cf-86b9aadb477e-socket-dir\") pod \"csi-node-driver-lnrx8\" (UID: \"30371a55-b2a2-4dfe-86cf-86b9aadb477e\") " pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:31.984818 kubelet[2796]: I0116 18:01:31.984401 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/30371a55-b2a2-4dfe-86cf-86b9aadb477e-kubelet-dir\") pod \"csi-node-driver-lnrx8\" (UID: \"30371a55-b2a2-4dfe-86cf-86b9aadb477e\") " pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:31.984818 kubelet[2796]: I0116 18:01:31.984417 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/30371a55-b2a2-4dfe-86cf-86b9aadb477e-varrun\") pod \"csi-node-driver-lnrx8\" (UID: \"30371a55-b2a2-4dfe-86cf-86b9aadb477e\") " pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:31.987236 kubelet[2796]: E0116 18:01:31.987199 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.987525 kubelet[2796]: W0116 18:01:31.987455 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.987525 kubelet[2796]: E0116 18:01:31.987493 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:31.989329 kubelet[2796]: E0116 18:01:31.989285 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.989329 kubelet[2796]: W0116 18:01:31.989307 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.989691 kubelet[2796]: E0116 18:01:31.989448 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.990616 kubelet[2796]: E0116 18:01:31.990578 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.990616 kubelet[2796]: W0116 18:01:31.990597 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.991330 kubelet[2796]: E0116 18:01:31.991305 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:31.992503 kubelet[2796]: E0116 18:01:31.992390 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.992811 kubelet[2796]: W0116 18:01:31.992788 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.993411 kubelet[2796]: E0116 18:01:31.993348 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.994700 kubelet[2796]: E0116 18:01:31.994464 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.994700 kubelet[2796]: W0116 18:01:31.994666 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.995068 kubelet[2796]: E0116 18:01:31.995021 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:31.995785 kubelet[2796]: E0116 18:01:31.995752 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.995785 kubelet[2796]: W0116 18:01:31.995771 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.995983 kubelet[2796]: E0116 18:01:31.995797 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.996855 kubelet[2796]: E0116 18:01:31.996740 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.996855 kubelet[2796]: W0116 18:01:31.996754 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.996927 kubelet[2796]: E0116 18:01:31.996906 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.996927 kubelet[2796]: W0116 18:01:31.996913 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.997150 kubelet[2796]: E0116 18:01:31.997060 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.997150 kubelet[2796]: W0116 18:01:31.997074 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.997150 kubelet[2796]: E0116 18:01:31.997092 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.997150 kubelet[2796]: E0116 18:01:31.997116 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.997150 kubelet[2796]: E0116 18:01:31.997126 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.997462 kubelet[2796]: E0116 18:01:31.997206 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.997462 kubelet[2796]: W0116 18:01:31.997214 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.997462 kubelet[2796]: E0116 18:01:31.997242 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:31.997869 kubelet[2796]: E0116 18:01:31.997812 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.997869 kubelet[2796]: W0116 18:01:31.997830 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:31.998101 kubelet[2796]: E0116 18:01:31.998083 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:31.999462 kubelet[2796]: E0116 18:01:31.998593 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:31.999462 kubelet[2796]: W0116 18:01:31.999324 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.001128 kubelet[2796]: E0116 18:01:32.001064 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.002250 kubelet[2796]: E0116 18:01:32.001598 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.002250 kubelet[2796]: W0116 18:01:32.001945 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.005031 kubelet[2796]: E0116 18:01:32.004440 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.013833 kubelet[2796]: E0116 18:01:32.009637 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.014222 kubelet[2796]: W0116 18:01:32.014143 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.014809 kubelet[2796]: E0116 18:01:32.014782 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.015499 kubelet[2796]: E0116 18:01:32.015429 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.015603 kubelet[2796]: W0116 18:01:32.015585 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.018076 kubelet[2796]: E0116 18:01:32.016700 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.019582 kubelet[2796]: E0116 18:01:32.019542 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.024513 kubelet[2796]: W0116 18:01:32.024426 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.026576 kubelet[2796]: E0116 18:01:32.026109 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.026779 kubelet[2796]: W0116 18:01:32.026584 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.027507 kubelet[2796]: E0116 18:01:32.027461 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.028889 kubelet[2796]: E0116 18:01:32.027720 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.028889 kubelet[2796]: E0116 18:01:32.027617 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.028889 kubelet[2796]: W0116 18:01:32.027750 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.028889 kubelet[2796]: E0116 18:01:32.028582 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.028889 kubelet[2796]: W0116 18:01:32.028808 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.029930 kubelet[2796]: E0116 18:01:32.029900 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.030261 kubelet[2796]: E0116 18:01:32.030054 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.030422 kubelet[2796]: W0116 18:01:32.030402 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.031540 kubelet[2796]: E0116 18:01:32.031515 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.031769 kubelet[2796]: W0116 18:01:32.031597 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.033168 kubelet[2796]: E0116 18:01:32.032743 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.033168 kubelet[2796]: W0116 18:01:32.032759 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.033824 kubelet[2796]: E0116 18:01:32.033786 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.033824 kubelet[2796]: W0116 18:01:32.033802 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.034543 kubelet[2796]: E0116 18:01:32.034524 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON 
input Jan 16 18:01:32.034745 kubelet[2796]: W0116 18:01:32.034725 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.034983 kubelet[2796]: E0116 18:01:32.034966 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.035257 kubelet[2796]: E0116 18:01:32.034749 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.035708 kubelet[2796]: E0116 18:01:32.034756 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.035708 kubelet[2796]: E0116 18:01:32.034767 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.035936 kubelet[2796]: E0116 18:01:32.034772 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.035936 kubelet[2796]: E0116 18:01:32.034735 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.036182 kubelet[2796]: E0116 18:01:32.036165 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.036332 kubelet[2796]: W0116 18:01:32.036217 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.036332 kubelet[2796]: E0116 18:01:32.036245 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.036777 kubelet[2796]: E0116 18:01:32.036758 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.036927 kubelet[2796]: W0116 18:01:32.036845 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.036927 kubelet[2796]: E0116 18:01:32.036877 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.040668 kubelet[2796]: E0116 18:01:32.040462 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.040668 kubelet[2796]: W0116 18:01:32.040660 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.040943 kubelet[2796]: E0116 18:01:32.040698 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.041939 kubelet[2796]: E0116 18:01:32.041909 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.041939 kubelet[2796]: W0116 18:01:32.041933 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.042065 kubelet[2796]: E0116 18:01:32.041953 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.064799 containerd[1571]: time="2026-01-16T18:01:32.064726623Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-7cf9f575d-5z4qk,Uid:8e2291e9-e15b-400e-b0bd-2becb04ee371,Namespace:calico-system,Attempt:0,} returns sandbox id \"cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215\"" Jan 16 18:01:32.070303 containerd[1571]: time="2026-01-16T18:01:32.070268080Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 16 18:01:32.086850 kubelet[2796]: E0116 18:01:32.086705 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.088312 kubelet[2796]: W0116 18:01:32.087922 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.088312 kubelet[2796]: E0116 18:01:32.087968 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.089752 kubelet[2796]: E0116 18:01:32.089726 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.089903 kubelet[2796]: W0116 18:01:32.089877 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.089977 kubelet[2796]: E0116 18:01:32.089962 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.090459 kubelet[2796]: E0116 18:01:32.090443 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.090548 kubelet[2796]: W0116 18:01:32.090535 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.090753 kubelet[2796]: E0116 18:01:32.090633 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.090911 kubelet[2796]: E0116 18:01:32.090901 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.090973 kubelet[2796]: W0116 18:01:32.090962 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.091131 kubelet[2796]: E0116 18:01:32.091117 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.091895 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.092890 kubelet[2796]: W0116 18:01:32.091915 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.091956 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.092213 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.092890 kubelet[2796]: W0116 18:01:32.092239 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.092251 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.092812 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.092890 kubelet[2796]: W0116 18:01:32.092828 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.092890 kubelet[2796]: E0116 18:01:32.092841 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.093219 kubelet[2796]: E0116 18:01:32.093084 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.093219 kubelet[2796]: W0116 18:01:32.093092 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.093219 kubelet[2796]: E0116 18:01:32.093101 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.093644 kubelet[2796]: E0116 18:01:32.093599 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.093644 kubelet[2796]: W0116 18:01:32.093618 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095347 kubelet[2796]: E0116 18:01:32.093661 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.095347 kubelet[2796]: E0116 18:01:32.094704 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.095347 kubelet[2796]: W0116 18:01:32.094719 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095347 kubelet[2796]: E0116 18:01:32.094955 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.095347 kubelet[2796]: W0116 18:01:32.094964 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095347 kubelet[2796]: E0116 18:01:32.095275 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.095347 kubelet[2796]: W0116 18:01:32.095285 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095347 kubelet[2796]: E0116 18:01:32.095297 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.095527 kubelet[2796]: E0116 18:01:32.095452 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.095527 kubelet[2796]: W0116 18:01:32.095460 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095527 kubelet[2796]: E0116 18:01:32.095468 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.095792 kubelet[2796]: E0116 18:01:32.095601 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.095792 kubelet[2796]: W0116 18:01:32.095640 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.095792 kubelet[2796]: E0116 18:01:32.095650 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.096123 kubelet[2796]: E0116 18:01:32.095993 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.096343 kubelet[2796]: E0116 18:01:32.096047 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.096826 kubelet[2796]: E0116 18:01:32.096796 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.096826 kubelet[2796]: W0116 18:01:32.096822 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.096983 kubelet[2796]: E0116 18:01:32.096860 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.097271 kubelet[2796]: E0116 18:01:32.097238 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.097271 kubelet[2796]: W0116 18:01:32.097268 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.097486 kubelet[2796]: E0116 18:01:32.097287 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.098910 kubelet[2796]: E0116 18:01:32.098665 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.098910 kubelet[2796]: W0116 18:01:32.098698 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.098910 kubelet[2796]: E0116 18:01:32.098719 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.099928 kubelet[2796]: E0116 18:01:32.099738 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.099928 kubelet[2796]: W0116 18:01:32.099761 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.099928 kubelet[2796]: E0116 18:01:32.099856 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.100271 kubelet[2796]: E0116 18:01:32.100187 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.100271 kubelet[2796]: W0116 18:01:32.100205 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.100821 kubelet[2796]: E0116 18:01:32.100782 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.101218 kubelet[2796]: E0116 18:01:32.100970 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.101218 kubelet[2796]: W0116 18:01:32.101079 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.101218 kubelet[2796]: E0116 18:01:32.101126 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.101423 kubelet[2796]: E0116 18:01:32.101364 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.101423 kubelet[2796]: W0116 18:01:32.101376 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.101423 kubelet[2796]: E0116 18:01:32.101389 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.102054 kubelet[2796]: E0116 18:01:32.101561 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.102054 kubelet[2796]: W0116 18:01:32.101576 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.102054 kubelet[2796]: E0116 18:01:32.101585 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.102317 kubelet[2796]: E0116 18:01:32.102227 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.102317 kubelet[2796]: W0116 18:01:32.102245 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.102317 kubelet[2796]: E0116 18:01:32.102264 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.104265 kubelet[2796]: E0116 18:01:32.104232 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.104265 kubelet[2796]: W0116 18:01:32.104255 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.104371 kubelet[2796]: E0116 18:01:32.104272 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.104664 containerd[1571]: time="2026-01-16T18:01:32.104444011Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7kmr,Uid:f4b10039-dc30-42c9-8b71-dc0e94181caa,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:32.104803 kubelet[2796]: E0116 18:01:32.104489 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.104803 kubelet[2796]: W0116 18:01:32.104498 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.104803 kubelet[2796]: E0116 18:01:32.104507 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:32.127702 kubelet[2796]: E0116 18:01:32.127658 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:32.127702 kubelet[2796]: W0116 18:01:32.127687 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:32.127865 kubelet[2796]: E0116 18:01:32.127724 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:32.140614 containerd[1571]: time="2026-01-16T18:01:32.140243164Z" level=info msg="connecting to shim c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29" address="unix:///run/containerd/s/8906e4f0d2a8a359e8f3442d719d04488a44ca077e4842e3e703dec9b69b0737" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:32.177229 systemd[1]: Started cri-containerd-c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29.scope - libcontainer container c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29. Jan 16 18:01:32.198000 audit: BPF prog-id=154 op=LOAD Jan 16 18:01:32.199000 audit: BPF prog-id=155 op=LOAD Jan 16 18:01:32.199000 audit[3321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.199000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.200000 audit: BPF prog-id=155 op=UNLOAD Jan 16 18:01:32.200000 audit[3321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 
18:01:32.200000 audit: BPF prog-id=156 op=LOAD Jan 16 18:01:32.200000 audit[3321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.200000 audit: BPF prog-id=157 op=LOAD Jan 16 18:01:32.200000 audit[3321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.200000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.201000 audit: BPF prog-id=157 op=UNLOAD Jan 16 18:01:32.201000 audit[3321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.201000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.201000 audit: BPF prog-id=156 op=UNLOAD Jan 16 18:01:32.201000 audit[3321]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.201000 audit: BPF prog-id=158 op=LOAD Jan 16 18:01:32.201000 audit[3321]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3310 pid=3321 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.201000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6336653832393934333863316164383531393932396433346164363964 Jan 16 18:01:32.222426 containerd[1571]: time="2026-01-16T18:01:32.222314831Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-v7kmr,Uid:f4b10039-dc30-42c9-8b71-dc0e94181caa,Namespace:calico-system,Attempt:0,} returns sandbox id \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\"" Jan 16 18:01:32.625000 audit[3348]: NETFILTER_CFG 
table=filter:119 family=2 entries=22 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:32.628482 kernel: kauditd_printk_skb: 64 callbacks suppressed Jan 16 18:01:32.628618 kernel: audit: type=1325 audit(1768586492.625:546): table=filter:119 family=2 entries=22 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:32.625000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd11f8cc0 a2=0 a3=1 items=0 ppid=2936 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.631578 kernel: audit: type=1300 audit(1768586492.625:546): arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd11f8cc0 a2=0 a3=1 items=0 ppid=2936 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.625000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:32.633548 kernel: audit: type=1327 audit(1768586492.625:546): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:32.631000 audit[3348]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:32.635266 kernel: audit: type=1325 audit(1768586492.631:547): table=nat:120 family=2 entries=12 op=nft_register_rule pid=3348 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:32.635373 kernel: audit: type=1300 audit(1768586492.631:547): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd11f8cc0 a2=0 
a3=1 items=0 ppid=2936 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.631000 audit[3348]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd11f8cc0 a2=0 a3=1 items=0 ppid=2936 pid=3348 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:32.631000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:32.639068 kernel: audit: type=1327 audit(1768586492.631:547): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:33.495663 kubelet[2796]: E0116 18:01:33.495520 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:33.529312 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4168025189.mount: Deactivated successfully. 
Jan 16 18:01:34.657843 containerd[1571]: time="2026-01-16T18:01:34.657095158Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:34.658502 containerd[1571]: time="2026-01-16T18:01:34.658426305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 16 18:01:34.660135 containerd[1571]: time="2026-01-16T18:01:34.660081089Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:34.663443 containerd[1571]: time="2026-01-16T18:01:34.663347536Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:34.664115 containerd[1571]: time="2026-01-16T18:01:34.664066289Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 2.593307455s" Jan 16 18:01:34.664115 containerd[1571]: time="2026-01-16T18:01:34.664105488Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 16 18:01:34.666103 containerd[1571]: time="2026-01-16T18:01:34.665350236Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 16 18:01:34.698770 containerd[1571]: time="2026-01-16T18:01:34.698695063Z" level=info msg="CreateContainer within sandbox \"cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215\" for 
container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 16 18:01:34.726162 containerd[1571]: time="2026-01-16T18:01:34.726078429Z" level=info msg="Container 5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:34.730694 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2366464479.mount: Deactivated successfully. Jan 16 18:01:34.741298 containerd[1571]: time="2026-01-16T18:01:34.741153718Z" level=info msg="CreateContainer within sandbox \"cde8565d231a32ff42732e5ea940085b6584ecf9cebbfb309a97651f84b82215\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a\"" Jan 16 18:01:34.741909 containerd[1571]: time="2026-01-16T18:01:34.741877471Z" level=info msg="StartContainer for \"5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a\"" Jan 16 18:01:34.745530 containerd[1571]: time="2026-01-16T18:01:34.745462915Z" level=info msg="connecting to shim 5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a" address="unix:///run/containerd/s/331c951073c8e4f44785c97a2c49abbca2d8c30f7a42d5d11dc829fecc92ea81" protocol=ttrpc version=3 Jan 16 18:01:34.771439 systemd[1]: Started cri-containerd-5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a.scope - libcontainer container 5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a. 
Jan 16 18:01:34.792000 audit: BPF prog-id=159 op=LOAD Jan 16 18:01:34.794776 kernel: audit: type=1334 audit(1768586494.792:548): prog-id=159 op=LOAD Jan 16 18:01:34.794000 audit: BPF prog-id=160 op=LOAD Jan 16 18:01:34.794000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.800055 kernel: audit: type=1334 audit(1768586494.794:549): prog-id=160 op=LOAD Jan 16 18:01:34.800176 kernel: audit: type=1300 audit(1768586494.794:549): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.804907 kernel: audit: type=1327 audit(1768586494.794:549): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.795000 audit: BPF prog-id=160 op=UNLOAD Jan 16 18:01:34.795000 audit[3359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:01:34.795000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.799000 audit: BPF prog-id=161 op=LOAD Jan 16 18:01:34.799000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.799000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.800000 audit: BPF prog-id=162 op=LOAD Jan 16 18:01:34.800000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.800000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.804000 audit: BPF prog-id=162 op=UNLOAD Jan 16 18:01:34.804000 audit[3359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.804000 audit: BPF prog-id=161 op=UNLOAD Jan 16 18:01:34.804000 audit[3359]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.804000 audit: BPF prog-id=163 op=LOAD Jan 16 18:01:34.804000 audit[3359]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3206 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:34.804000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3563303664306462363231396333373430653531663034633333326335 Jan 16 18:01:34.842659 containerd[1571]: time="2026-01-16T18:01:34.842586704Z" level=info msg="StartContainer for \"5c06d0db6219c3740e51f04c332c55c214679a5a673e19f567326292f6eaea9a\" returns successfully" Jan 16 18:01:35.496150 kubelet[2796]: E0116 18:01:35.495565 2796 
pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:35.682127 kubelet[2796]: I0116 18:01:35.682044 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-7cf9f575d-5z4qk" podStartSLOduration=2.086179073 podStartE2EDuration="4.681470946s" podCreationTimestamp="2026-01-16 18:01:31 +0000 UTC" firstStartedPulling="2026-01-16 18:01:32.069849845 +0000 UTC m=+27.687355869" lastFinishedPulling="2026-01-16 18:01:34.665141678 +0000 UTC m=+30.282647742" observedRunningTime="2026-01-16 18:01:35.680780392 +0000 UTC m=+31.298286456" watchObservedRunningTime="2026-01-16 18:01:35.681470946 +0000 UTC m=+31.298977010" Jan 16 18:01:35.689878 kubelet[2796]: E0116 18:01:35.689810 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.690093 kubelet[2796]: W0116 18:01:35.689849 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.690093 kubelet[2796]: E0116 18:01:35.689960 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.690358 kubelet[2796]: E0116 18:01:35.690268 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.690456 kubelet[2796]: W0116 18:01:35.690296 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.690456 kubelet[2796]: E0116 18:01:35.690401 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.690737 kubelet[2796]: E0116 18:01:35.690695 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.690737 kubelet[2796]: W0116 18:01:35.690719 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.690737 kubelet[2796]: E0116 18:01:35.690734 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.691003 kubelet[2796]: E0116 18:01:35.690970 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.691003 kubelet[2796]: W0116 18:01:35.690989 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.691003 kubelet[2796]: E0116 18:01:35.691004 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.691217 kubelet[2796]: E0116 18:01:35.691202 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.691259 kubelet[2796]: W0116 18:01:35.691218 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.691259 kubelet[2796]: E0116 18:01:35.691233 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.691451 kubelet[2796]: E0116 18:01:35.691435 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.691451 kubelet[2796]: W0116 18:01:35.691451 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.691552 kubelet[2796]: E0116 18:01:35.691463 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.691678 kubelet[2796]: E0116 18:01:35.691661 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.691678 kubelet[2796]: W0116 18:01:35.691677 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.691779 kubelet[2796]: E0116 18:01:35.691689 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.692152 kubelet[2796]: E0116 18:01:35.692127 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.692152 kubelet[2796]: W0116 18:01:35.692151 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.692249 kubelet[2796]: E0116 18:01:35.692168 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.692455 kubelet[2796]: E0116 18:01:35.692437 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.692500 kubelet[2796]: W0116 18:01:35.692457 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.692500 kubelet[2796]: E0116 18:01:35.692474 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.692823 kubelet[2796]: E0116 18:01:35.692804 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.692823 kubelet[2796]: W0116 18:01:35.692821 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.692937 kubelet[2796]: E0116 18:01:35.692832 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.693040 kubelet[2796]: E0116 18:01:35.693012 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.693040 kubelet[2796]: W0116 18:01:35.693028 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.693040 kubelet[2796]: E0116 18:01:35.693037 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.693173 kubelet[2796]: E0116 18:01:35.693160 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.693173 kubelet[2796]: W0116 18:01:35.693172 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.693240 kubelet[2796]: E0116 18:01:35.693180 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.693317 kubelet[2796]: E0116 18:01:35.693306 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.693317 kubelet[2796]: W0116 18:01:35.693316 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.693366 kubelet[2796]: E0116 18:01:35.693324 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.693448 kubelet[2796]: E0116 18:01:35.693437 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.693448 kubelet[2796]: W0116 18:01:35.693447 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.693494 kubelet[2796]: E0116 18:01:35.693454 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.693568 kubelet[2796]: E0116 18:01:35.693558 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.693568 kubelet[2796]: W0116 18:01:35.693568 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.693629 kubelet[2796]: E0116 18:01:35.693575 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.729492 kubelet[2796]: E0116 18:01:35.729391 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.729492 kubelet[2796]: W0116 18:01:35.729433 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.729492 kubelet[2796]: E0116 18:01:35.729459 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.729959 kubelet[2796]: E0116 18:01:35.729789 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.729959 kubelet[2796]: W0116 18:01:35.729804 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.729959 kubelet[2796]: E0116 18:01:35.729820 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.730222 kubelet[2796]: E0116 18:01:35.730105 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.730222 kubelet[2796]: W0116 18:01:35.730116 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.730222 kubelet[2796]: E0116 18:01:35.730139 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.730434 kubelet[2796]: E0116 18:01:35.730339 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.730434 kubelet[2796]: W0116 18:01:35.730409 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.730434 kubelet[2796]: E0116 18:01:35.730425 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.730598 kubelet[2796]: E0116 18:01:35.730585 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.730598 kubelet[2796]: W0116 18:01:35.730597 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.730697 kubelet[2796]: E0116 18:01:35.730606 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.730747 kubelet[2796]: E0116 18:01:35.730732 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.730779 kubelet[2796]: W0116 18:01:35.730746 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.730779 kubelet[2796]: E0116 18:01:35.730755 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.731073 kubelet[2796]: E0116 18:01:35.730926 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.731073 kubelet[2796]: W0116 18:01:35.730948 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.731073 kubelet[2796]: E0116 18:01:35.730959 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.732062 kubelet[2796]: E0116 18:01:35.732035 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.732062 kubelet[2796]: W0116 18:01:35.732058 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.732158 kubelet[2796]: E0116 18:01:35.732085 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.733753 kubelet[2796]: E0116 18:01:35.733717 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.733753 kubelet[2796]: W0116 18:01:35.733740 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.733851 kubelet[2796]: E0116 18:01:35.733765 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.734061 kubelet[2796]: E0116 18:01:35.734018 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.734061 kubelet[2796]: W0116 18:01:35.734037 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.734061 kubelet[2796]: E0116 18:01:35.734059 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.734269 kubelet[2796]: E0116 18:01:35.734245 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.734269 kubelet[2796]: W0116 18:01:35.734266 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.734332 kubelet[2796]: E0116 18:01:35.734276 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.736298 kubelet[2796]: E0116 18:01:35.736201 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.736298 kubelet[2796]: W0116 18:01:35.736231 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.736579 kubelet[2796]: E0116 18:01:35.736391 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.736579 kubelet[2796]: W0116 18:01:35.736403 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.736579 kubelet[2796]: E0116 18:01:35.736566 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.736579 kubelet[2796]: W0116 18:01:35.736575 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: 
[init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.736593 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.736765 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.737833 kubelet[2796]: W0116 18:01:35.736773 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.736781 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.737067 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.737110 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.737148 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.737833 kubelet[2796]: W0116 18:01:35.737159 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.737170 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:35.737833 kubelet[2796]: E0116 18:01:35.737368 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.738114 kubelet[2796]: W0116 18:01:35.737379 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.738114 kubelet[2796]: E0116 18:01:35.737389 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 16 18:01:35.738114 kubelet[2796]: E0116 18:01:35.737724 2796 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 16 18:01:35.738114 kubelet[2796]: W0116 18:01:35.737736 2796 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 16 18:01:35.738114 kubelet[2796]: E0116 18:01:35.737755 2796 plugins.go:695] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 16 18:01:36.122484 containerd[1571]: time="2026-01-16T18:01:36.121810331Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:36.124019 containerd[1571]: time="2026-01-16T18:01:36.123965472Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:36.125349 containerd[1571]: time="2026-01-16T18:01:36.125285660Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:36.127746 containerd[1571]: time="2026-01-16T18:01:36.127700639Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:36.128550 containerd[1571]: time="2026-01-16T18:01:36.128495032Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag 
\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.463100836s" Jan 16 18:01:36.128550 containerd[1571]: time="2026-01-16T18:01:36.128550191Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 16 18:01:36.133821 containerd[1571]: time="2026-01-16T18:01:36.133762506Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 16 18:01:36.146558 containerd[1571]: time="2026-01-16T18:01:36.143463540Z" level=info msg="Container 596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:36.161459 containerd[1571]: time="2026-01-16T18:01:36.161363943Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b\"" Jan 16 18:01:36.162439 containerd[1571]: time="2026-01-16T18:01:36.162402694Z" level=info msg="StartContainer for \"596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b\"" Jan 16 18:01:36.165657 containerd[1571]: time="2026-01-16T18:01:36.165564466Z" level=info msg="connecting to shim 596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b" address="unix:///run/containerd/s/8906e4f0d2a8a359e8f3442d719d04488a44ca077e4842e3e703dec9b69b0737" protocol=ttrpc version=3 Jan 16 18:01:36.201993 systemd[1]: Started cri-containerd-596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b.scope - libcontainer container 
596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b. Jan 16 18:01:36.264000 audit: BPF prog-id=164 op=LOAD Jan 16 18:01:36.264000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3310 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:36.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366635383538306361306536366133393931663563316438396363 Jan 16 18:01:36.264000 audit: BPF prog-id=165 op=LOAD Jan 16 18:01:36.264000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3310 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:36.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366635383538306361306536366133393931663563316438396363 Jan 16 18:01:36.264000 audit: BPF prog-id=165 op=UNLOAD Jan 16 18:01:36.264000 audit[3435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:36.264000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366635383538306361306536366133393931663563316438396363 Jan 16 18:01:36.264000 audit: BPF prog-id=164 op=UNLOAD Jan 16 18:01:36.264000 audit[3435]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:36.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366635383538306361306536366133393931663563316438396363 Jan 16 18:01:36.264000 audit: BPF prog-id=166 op=LOAD Jan 16 18:01:36.264000 audit[3435]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3310 pid=3435 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:36.264000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539366635383538306361306536366133393931663563316438396363 Jan 16 18:01:36.300297 containerd[1571]: time="2026-01-16T18:01:36.300255003Z" level=info msg="StartContainer for \"596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b\" returns successfully" Jan 16 18:01:36.320645 systemd[1]: cri-containerd-596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b.scope: Deactivated successfully. 
Jan 16 18:01:36.324000 audit: BPF prog-id=166 op=UNLOAD Jan 16 18:01:36.326542 containerd[1571]: time="2026-01-16T18:01:36.326333774Z" level=info msg="received container exit event container_id:\"596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b\" id:\"596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b\" pid:3449 exited_at:{seconds:1768586496 nanos:325233744}" Jan 16 18:01:36.354820 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-596f58580ca0e66a3991f5c1d89cce5f1d6b7692b695285e076a6e91c7d36d8b-rootfs.mount: Deactivated successfully. Jan 16 18:01:36.669293 kubelet[2796]: I0116 18:01:36.668362 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 18:01:36.673918 containerd[1571]: time="2026-01-16T18:01:36.673866961Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 16 18:01:37.495575 kubelet[2796]: E0116 18:01:37.495494 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:38.540030 kubelet[2796]: I0116 18:01:38.539974 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 18:01:38.618000 audit[3490]: NETFILTER_CFG table=filter:121 family=2 entries=21 op=nft_register_rule pid=3490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:38.621269 kernel: kauditd_printk_skb: 34 callbacks suppressed Jan 16 18:01:38.621517 kernel: audit: type=1325 audit(1768586498.618:562): table=filter:121 family=2 entries=21 op=nft_register_rule pid=3490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:38.621545 kernel: audit: type=1300 audit(1768586498.618:562): arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff3c3f7f0 a2=0 a3=1 
items=0 ppid=2936 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:38.618000 audit[3490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff3c3f7f0 a2=0 a3=1 items=0 ppid=2936 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:38.618000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:38.625174 kernel: audit: type=1327 audit(1768586498.618:562): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:38.626000 audit[3490]: NETFILTER_CFG table=nat:122 family=2 entries=19 op=nft_register_chain pid=3490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:38.626000 audit[3490]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff3c3f7f0 a2=0 a3=1 items=0 ppid=2936 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:38.630769 kernel: audit: type=1325 audit(1768586498.626:563): table=nat:122 family=2 entries=19 op=nft_register_chain pid=3490 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:38.630857 kernel: audit: type=1300 audit(1768586498.626:563): arch=c00000b7 syscall=211 success=yes exit=6276 a0=3 a1=fffff3c3f7f0 a2=0 a3=1 items=0 ppid=2936 pid=3490 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:38.626000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:38.634640 kernel: audit: type=1327 audit(1768586498.626:563): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:39.437417 containerd[1571]: time="2026-01-16T18:01:39.436758724Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:39.438283 containerd[1571]: time="2026-01-16T18:01:39.438223703Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 16 18:01:39.438711 containerd[1571]: time="2026-01-16T18:01:39.438684654Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:39.443444 containerd[1571]: time="2026-01-16T18:01:39.443353370Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:39.446766 containerd[1571]: time="2026-01-16T18:01:39.445417989Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.771279309s" Jan 16 18:01:39.446766 containerd[1571]: time="2026-01-16T18:01:39.445476553Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 16 18:01:39.452052 containerd[1571]: time="2026-01-16T18:01:39.451971552Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 16 18:01:39.466248 containerd[1571]: time="2026-01-16T18:01:39.466200513Z" level=info msg="Container d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:39.473153 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2519127660.mount: Deactivated successfully. Jan 16 18:01:39.481203 containerd[1571]: time="2026-01-16T18:01:39.480809780Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c\"" Jan 16 18:01:39.481804 containerd[1571]: time="2026-01-16T18:01:39.481673639Z" level=info msg="StartContainer for \"d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c\"" Jan 16 18:01:39.485513 containerd[1571]: time="2026-01-16T18:01:39.485431133Z" level=info msg="connecting to shim d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c" address="unix:///run/containerd/s/8906e4f0d2a8a359e8f3442d719d04488a44ca077e4842e3e703dec9b69b0737" protocol=ttrpc version=3 Jan 16 18:01:39.495468 kubelet[2796]: E0116 18:01:39.495400 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:39.517199 systemd[1]: Started 
cri-containerd-d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c.scope - libcontainer container d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c. Jan 16 18:01:39.585000 audit: BPF prog-id=167 op=LOAD Jan 16 18:01:39.585000 audit[3495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3310 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.589375 kernel: audit: type=1334 audit(1768586499.585:564): prog-id=167 op=LOAD Jan 16 18:01:39.589472 kernel: audit: type=1300 audit(1768586499.585:564): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3310 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.589496 kernel: audit: type=1327 audit(1768586499.585:564): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.585000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.586000 audit: BPF prog-id=168 op=LOAD Jan 16 18:01:39.586000 audit[3495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3310 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.592833 kernel: audit: type=1334 audit(1768586499.586:565): prog-id=168 op=LOAD Jan 16 18:01:39.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.588000 audit: BPF prog-id=168 op=UNLOAD Jan 16 18:01:39.588000 audit[3495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.588000 audit: BPF prog-id=167 op=UNLOAD Jan 16 18:01:39.588000 audit[3495]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3495 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.588000 audit: BPF prog-id=169 op=LOAD Jan 16 18:01:39.588000 audit[3495]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3310 pid=3495 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:39.588000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6436663530386235303365666664316662643637333561663762316233 Jan 16 18:01:39.619325 containerd[1571]: time="2026-01-16T18:01:39.619284617Z" level=info msg="StartContainer for \"d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c\" returns successfully" Jan 16 18:01:40.250971 containerd[1571]: time="2026-01-16T18:01:40.250902546Z" level=error msg="failed to reload cni configuration after receiving fs change event(WRITE \"/etc/cni/net.d/calico-kubeconfig\")" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 16 18:01:40.256512 systemd[1]: cri-containerd-d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c.scope: Deactivated successfully. Jan 16 18:01:40.256960 systemd[1]: cri-containerd-d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c.scope: Consumed 572ms CPU time, 193.6M memory peak, 165.9M written to disk. Jan 16 18:01:40.260000 audit: BPF prog-id=169 op=UNLOAD Jan 16 18:01:40.261897 containerd[1571]: time="2026-01-16T18:01:40.261792581Z" level=info msg="received container exit event container_id:\"d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c\" id:\"d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c\" pid:3509 exited_at:{seconds:1768586500 nanos:261121457}" Jan 16 18:01:40.290212 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d6f508b503effd1fbd6735af7b1b367de58086a1cfc46a887f899e388542371c-rootfs.mount: Deactivated successfully. 
Jan 16 18:01:40.318649 kubelet[2796]: I0116 18:01:40.317669 2796 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 16 18:01:40.368310 systemd[1]: Created slice kubepods-burstable-podb5830553_7e67_4092_9e5b_09dd5a3e6768.slice - libcontainer container kubepods-burstable-podb5830553_7e67_4092_9e5b_09dd5a3e6768.slice. Jan 16 18:01:40.387492 systemd[1]: Created slice kubepods-burstable-pod99950396_08ab_48b3_8770_e8de9a7d70c4.slice - libcontainer container kubepods-burstable-pod99950396_08ab_48b3_8770_e8de9a7d70c4.slice. Jan 16 18:01:40.414315 systemd[1]: Created slice kubepods-besteffort-pod32aa7e73_83e3_4413_b798_e52a9caaa69f.slice - libcontainer container kubepods-besteffort-pod32aa7e73_83e3_4413_b798_e52a9caaa69f.slice. Jan 16 18:01:40.430519 systemd[1]: Created slice kubepods-besteffort-podf2cd278e_9655_4f08_a8bd_cf1ad09f05be.slice - libcontainer container kubepods-besteffort-podf2cd278e_9655_4f08_a8bd_cf1ad09f05be.slice. Jan 16 18:01:40.441750 systemd[1]: Created slice kubepods-besteffort-pod91c39312_4785_4237_a846_6ef23d8c4ee9.slice - libcontainer container kubepods-besteffort-pod91c39312_4785_4237_a846_6ef23d8c4ee9.slice. Jan 16 18:01:40.452294 systemd[1]: Created slice kubepods-besteffort-pod5fef7a00_b60c_4abb_8c75_26be1bbddcd8.slice - libcontainer container kubepods-besteffort-pod5fef7a00_b60c_4abb_8c75_26be1bbddcd8.slice. Jan 16 18:01:40.462178 systemd[1]: Created slice kubepods-besteffort-pod9b676f4f_1816_4b2f_88ec_512b756c1b31.slice - libcontainer container kubepods-besteffort-pod9b676f4f_1816_4b2f_88ec_512b756c1b31.slice. 
Jan 16 18:01:40.472345 kubelet[2796]: I0116 18:01:40.472294 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fggl9\" (UniqueName: \"kubernetes.io/projected/b5830553-7e67-4092-9e5b-09dd5a3e6768-kube-api-access-fggl9\") pod \"coredns-668d6bf9bc-dq7g7\" (UID: \"b5830553-7e67-4092-9e5b-09dd5a3e6768\") " pod="kube-system/coredns-668d6bf9bc-dq7g7" Jan 16 18:01:40.472345 kubelet[2796]: I0116 18:01:40.472352 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dpcfg\" (UniqueName: \"kubernetes.io/projected/5fef7a00-b60c-4abb-8c75-26be1bbddcd8-kube-api-access-dpcfg\") pod \"calico-kube-controllers-78d49b4987-blz52\" (UID: \"5fef7a00-b60c-4abb-8c75-26be1bbddcd8\") " pod="calico-system/calico-kube-controllers-78d49b4987-blz52" Jan 16 18:01:40.472684 kubelet[2796]: I0116 18:01:40.472377 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9fk2q\" (UniqueName: \"kubernetes.io/projected/91c39312-4785-4237-a846-6ef23d8c4ee9-kube-api-access-9fk2q\") pod \"goldmane-666569f655-5gfbb\" (UID: \"91c39312-4785-4237-a846-6ef23d8c4ee9\") " pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.472684 kubelet[2796]: I0116 18:01:40.472399 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/b5830553-7e67-4092-9e5b-09dd5a3e6768-config-volume\") pod \"coredns-668d6bf9bc-dq7g7\" (UID: \"b5830553-7e67-4092-9e5b-09dd5a3e6768\") " pod="kube-system/coredns-668d6bf9bc-dq7g7" Jan 16 18:01:40.472684 kubelet[2796]: I0116 18:01:40.472419 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tw62r\" (UniqueName: \"kubernetes.io/projected/9b676f4f-1816-4b2f-88ec-512b756c1b31-kube-api-access-tw62r\") pod 
\"calico-apiserver-66b59648fd-qhvz8\" (UID: \"9b676f4f-1816-4b2f-88ec-512b756c1b31\") " pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" Jan 16 18:01:40.472684 kubelet[2796]: I0116 18:01:40.472439 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/99950396-08ab-48b3-8770-e8de9a7d70c4-config-volume\") pod \"coredns-668d6bf9bc-flqcp\" (UID: \"99950396-08ab-48b3-8770-e8de9a7d70c4\") " pod="kube-system/coredns-668d6bf9bc-flqcp" Jan 16 18:01:40.472684 kubelet[2796]: I0116 18:01:40.472462 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-backend-key-pair\") pod \"whisker-69fb9f644f-xkdm4\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " pod="calico-system/whisker-69fb9f644f-xkdm4" Jan 16 18:01:40.472863 kubelet[2796]: I0116 18:01:40.472486 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/91c39312-4785-4237-a846-6ef23d8c4ee9-config\") pod \"goldmane-666569f655-5gfbb\" (UID: \"91c39312-4785-4237-a846-6ef23d8c4ee9\") " pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.472863 kubelet[2796]: I0116 18:01:40.472509 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/32aa7e73-83e3-4413-b798-e52a9caaa69f-calico-apiserver-certs\") pod \"calico-apiserver-66b59648fd-pnwsh\" (UID: \"32aa7e73-83e3-4413-b798-e52a9caaa69f\") " pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" Jan 16 18:01:40.472863 kubelet[2796]: I0116 18:01:40.472530 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/91c39312-4785-4237-a846-6ef23d8c4ee9-goldmane-ca-bundle\") pod \"goldmane-666569f655-5gfbb\" (UID: \"91c39312-4785-4237-a846-6ef23d8c4ee9\") " pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.472863 kubelet[2796]: I0116 18:01:40.472549 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/91c39312-4785-4237-a846-6ef23d8c4ee9-goldmane-key-pair\") pod \"goldmane-666569f655-5gfbb\" (UID: \"91c39312-4785-4237-a846-6ef23d8c4ee9\") " pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.472863 kubelet[2796]: I0116 18:01:40.472568 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fmzzn\" (UniqueName: \"kubernetes.io/projected/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-kube-api-access-fmzzn\") pod \"whisker-69fb9f644f-xkdm4\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " pod="calico-system/whisker-69fb9f644f-xkdm4" Jan 16 18:01:40.473124 kubelet[2796]: I0116 18:01:40.472587 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t2ks5\" (UniqueName: \"kubernetes.io/projected/32aa7e73-83e3-4413-b798-e52a9caaa69f-kube-api-access-t2ks5\") pod \"calico-apiserver-66b59648fd-pnwsh\" (UID: \"32aa7e73-83e3-4413-b798-e52a9caaa69f\") " pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" Jan 16 18:01:40.473124 kubelet[2796]: I0116 18:01:40.472606 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/9b676f4f-1816-4b2f-88ec-512b756c1b31-calico-apiserver-certs\") pod \"calico-apiserver-66b59648fd-qhvz8\" (UID: \"9b676f4f-1816-4b2f-88ec-512b756c1b31\") " pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" Jan 16 18:01:40.473124 kubelet[2796]: I0116 18:01:40.472662 2796 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-ca-bundle\") pod \"whisker-69fb9f644f-xkdm4\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " pod="calico-system/whisker-69fb9f644f-xkdm4" Jan 16 18:01:40.473124 kubelet[2796]: I0116 18:01:40.472686 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/5fef7a00-b60c-4abb-8c75-26be1bbddcd8-tigera-ca-bundle\") pod \"calico-kube-controllers-78d49b4987-blz52\" (UID: \"5fef7a00-b60c-4abb-8c75-26be1bbddcd8\") " pod="calico-system/calico-kube-controllers-78d49b4987-blz52" Jan 16 18:01:40.473124 kubelet[2796]: I0116 18:01:40.472707 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l95xm\" (UniqueName: \"kubernetes.io/projected/99950396-08ab-48b3-8770-e8de9a7d70c4-kube-api-access-l95xm\") pod \"coredns-668d6bf9bc-flqcp\" (UID: \"99950396-08ab-48b3-8770-e8de9a7d70c4\") " pod="kube-system/coredns-668d6bf9bc-flqcp" Jan 16 18:01:40.675755 containerd[1571]: time="2026-01-16T18:01:40.675707454Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7g7,Uid:b5830553-7e67-4092-9e5b-09dd5a3e6768,Namespace:kube-system,Attempt:0,}" Jan 16 18:01:40.704425 containerd[1571]: time="2026-01-16T18:01:40.704338535Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 16 18:01:40.706797 containerd[1571]: time="2026-01-16T18:01:40.705796591Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-flqcp,Uid:99950396-08ab-48b3-8770-e8de9a7d70c4,Namespace:kube-system,Attempt:0,}" Jan 16 18:01:40.729301 containerd[1571]: time="2026-01-16T18:01:40.729256892Z" level=info msg="RunPodSandbox for 
&PodSandboxMetadata{Name:calico-apiserver-66b59648fd-pnwsh,Uid:32aa7e73-83e3-4413-b798-e52a9caaa69f,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:01:40.741293 containerd[1571]: time="2026-01-16T18:01:40.741249040Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fb9f644f-xkdm4,Uid:f2cd278e-9655-4f08-a8bd-cf1ad09f05be,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:40.747684 containerd[1571]: time="2026-01-16T18:01:40.747636540Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5gfbb,Uid:91c39312-4785-4237-a846-6ef23d8c4ee9,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:40.758988 containerd[1571]: time="2026-01-16T18:01:40.758822155Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d49b4987-blz52,Uid:5fef7a00-b60c-4abb-8c75-26be1bbddcd8,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:40.773095 containerd[1571]: time="2026-01-16T18:01:40.772942722Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-qhvz8,Uid:9b676f4f-1816-4b2f-88ec-512b756c1b31,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:01:40.950323 containerd[1571]: time="2026-01-16T18:01:40.950095681Z" level=error msg="Failed to destroy network for sandbox \"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.957652 containerd[1571]: time="2026-01-16T18:01:40.956946931Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7g7,Uid:b5830553-7e67-4092-9e5b-09dd5a3e6768,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such 
file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.957823 kubelet[2796]: E0116 18:01:40.957241 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.957823 kubelet[2796]: E0116 18:01:40.957327 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7g7" Jan 16 18:01:40.957823 kubelet[2796]: E0116 18:01:40.957348 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-dq7g7" Jan 16 18:01:40.957941 kubelet[2796]: E0116 18:01:40.957402 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-dq7g7_kube-system(b5830553-7e67-4092-9e5b-09dd5a3e6768)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-dq7g7_kube-system(b5830553-7e67-4092-9e5b-09dd5a3e6768)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox 
\\\"86354c7c60b07f8e9c3658e4dcf7b54840ce094596e6fde15cc1feb6e0163914\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-dq7g7" podUID="b5830553-7e67-4092-9e5b-09dd5a3e6768" Jan 16 18:01:40.965390 containerd[1571]: time="2026-01-16T18:01:40.965324081Z" level=error msg="Failed to destroy network for sandbox \"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.972978 containerd[1571]: time="2026-01-16T18:01:40.971205708Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-69fb9f644f-xkdm4,Uid:f2cd278e-9655-4f08-a8bd-cf1ad09f05be,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.973181 kubelet[2796]: E0116 18:01:40.971468 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.973181 kubelet[2796]: E0116 18:01:40.971526 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69fb9f644f-xkdm4" Jan 16 18:01:40.973181 kubelet[2796]: E0116 18:01:40.971545 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-69fb9f644f-xkdm4" Jan 16 18:01:40.973279 kubelet[2796]: E0116 18:01:40.971614 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-69fb9f644f-xkdm4_calico-system(f2cd278e-9655-4f08-a8bd-cf1ad09f05be)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-69fb9f644f-xkdm4_calico-system(f2cd278e-9655-4f08-a8bd-cf1ad09f05be)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"78c8cf75105862d3b36066c74022626e074e09880c00865b5b74eaa0ce1a1d99\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-69fb9f644f-xkdm4" podUID="f2cd278e-9655-4f08-a8bd-cf1ad09f05be" Jan 16 18:01:40.974522 containerd[1571]: time="2026-01-16T18:01:40.974387757Z" level=error msg="Failed to destroy network for sandbox \"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 
18:01:40.979467 containerd[1571]: time="2026-01-16T18:01:40.979354603Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-flqcp,Uid:99950396-08ab-48b3-8770-e8de9a7d70c4,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.980324 kubelet[2796]: E0116 18:01:40.979613 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.980324 kubelet[2796]: E0116 18:01:40.979704 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-flqcp" Jan 16 18:01:40.980324 kubelet[2796]: E0116 18:01:40.979727 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-668d6bf9bc-flqcp" Jan 
16 18:01:40.980442 kubelet[2796]: E0116 18:01:40.979770 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-668d6bf9bc-flqcp_kube-system(99950396-08ab-48b3-8770-e8de9a7d70c4)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-668d6bf9bc-flqcp_kube-system(99950396-08ab-48b3-8770-e8de9a7d70c4)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"6412909fd13dd87da1cb8a05a7e22c6071cf51461540856aaf7e412562af1658\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-668d6bf9bc-flqcp" podUID="99950396-08ab-48b3-8770-e8de9a7d70c4" Jan 16 18:01:40.983874 containerd[1571]: time="2026-01-16T18:01:40.983711529Z" level=error msg="Failed to destroy network for sandbox \"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.987575 containerd[1571]: time="2026-01-16T18:01:40.987523140Z" level=error msg="Failed to destroy network for sandbox \"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.988097 containerd[1571]: time="2026-01-16T18:01:40.988058855Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-pnwsh,Uid:32aa7e73-83e3-4413-b798-e52a9caaa69f,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.988894 containerd[1571]: time="2026-01-16T18:01:40.988866708Z" level=error msg="Failed to destroy network for sandbox \"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.989113 kubelet[2796]: E0116 18:01:40.989021 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.989226 kubelet[2796]: E0116 18:01:40.989142 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" Jan 16 18:01:40.989226 kubelet[2796]: E0116 18:01:40.989163 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" 
pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" Jan 16 18:01:40.989226 kubelet[2796]: E0116 18:01:40.989210 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1ac05d3c641b881c2bd4af5951193b8463fd80d55a266c8243056a86341a565c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:01:40.991515 containerd[1571]: time="2026-01-16T18:01:40.991365472Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d49b4987-blz52,Uid:5fef7a00-b60c-4abb-8c75-26be1bbddcd8,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.992639 kubelet[2796]: E0116 18:01:40.992557 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.992761 kubelet[2796]: E0116 18:01:40.992653 2796 
kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" Jan 16 18:01:40.992761 kubelet[2796]: E0116 18:01:40.992675 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" Jan 16 18:01:40.992761 kubelet[2796]: E0116 18:01:40.992722 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7bd525d09424b136bbdfc4f42fcdd8a661d3cf5093ae8374ce109129f14d5dd0\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:01:40.994826 containerd[1571]: time="2026-01-16T18:01:40.994705452Z" level=error msg="RunPodSandbox for 
&PodSandboxMetadata{Name:goldmane-666569f655-5gfbb,Uid:91c39312-4785-4237-a846-6ef23d8c4ee9,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.995152 kubelet[2796]: E0116 18:01:40.995116 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:40.995309 kubelet[2796]: E0116 18:01:40.995270 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.995522 kubelet[2796]: E0116 18:01:40.995430 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-5gfbb" Jan 16 18:01:40.995653 kubelet[2796]: E0116 18:01:40.995588 2796 pod_workers.go:1301] "Error syncing 
pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7fa721614280120dc2c88b8a9bb4c0581c96dbba7fb9785848776e5b4bb9c5b1\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:01:41.011231 containerd[1571]: time="2026-01-16T18:01:41.011019506Z" level=error msg="Failed to destroy network for sandbox \"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.015383 containerd[1571]: time="2026-01-16T18:01:41.015308740Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-qhvz8,Uid:9b676f4f-1816-4b2f-88ec-512b756c1b31,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.016603 kubelet[2796]: E0116 18:01:41.016490 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\": plugin type=\"calico\" failed (add): stat 
/var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.016820 kubelet[2796]: E0116 18:01:41.016585 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" Jan 16 18:01:41.016820 kubelet[2796]: E0116 18:01:41.016773 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" Jan 16 18:01:41.017078 kubelet[2796]: E0116 18:01:41.017001 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"43c09330cf9f0c4eb9ab1b855ff938c4e35e817d10d15806f41823839d272b3f\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 
18:01:41.507454 systemd[1]: Created slice kubepods-besteffort-pod30371a55_b2a2_4dfe_86cf_86b9aadb477e.slice - libcontainer container kubepods-besteffort-pod30371a55_b2a2_4dfe_86cf_86b9aadb477e.slice. Jan 16 18:01:41.511748 containerd[1571]: time="2026-01-16T18:01:41.511708573Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnrx8,Uid:30371a55-b2a2-4dfe-86cf-86b9aadb477e,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:41.575704 containerd[1571]: time="2026-01-16T18:01:41.575647858Z" level=error msg="Failed to destroy network for sandbox \"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.579904 containerd[1571]: time="2026-01-16T18:01:41.579551068Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnrx8,Uid:30371a55-b2a2-4dfe-86cf-86b9aadb477e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.580933 kubelet[2796]: E0116 18:01:41.580836 2796 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 16 18:01:41.580933 kubelet[2796]: E0116 18:01:41.580915 2796 kuberuntime_sandbox.go:72] "Failed to create sandbox for pod" err="rpc error: code = 
Unknown desc = failed to setup network for sandbox \"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:41.581977 kubelet[2796]: E0116 18:01:41.580942 2796 kuberuntime_manager.go:1237] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-lnrx8" Jan 16 18:01:41.581977 kubelet[2796]: E0116 18:01:41.580991 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"7467f0451f00ea4ed82470348ccb845305ca2d6bf3f9bc1da246d129bcd2f0cc\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:45.467960 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1572545690.mount: Deactivated successfully. 
Jan 16 18:01:45.499542 containerd[1571]: time="2026-01-16T18:01:45.498605679Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:45.500142 containerd[1571]: time="2026-01-16T18:01:45.500064962Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 16 18:01:45.500986 containerd[1571]: time="2026-01-16T18:01:45.500956253Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:45.503656 containerd[1571]: time="2026-01-16T18:01:45.503601284Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 16 18:01:45.504278 containerd[1571]: time="2026-01-16T18:01:45.504207319Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 4.798518015s" Jan 16 18:01:45.504278 containerd[1571]: time="2026-01-16T18:01:45.504249801Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 16 18:01:45.524551 containerd[1571]: time="2026-01-16T18:01:45.524506960Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 16 18:01:45.541123 containerd[1571]: time="2026-01-16T18:01:45.541075827Z" level=info msg="Container 
a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:45.548407 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount113598404.mount: Deactivated successfully. Jan 16 18:01:45.557656 containerd[1571]: time="2026-01-16T18:01:45.557582251Z" level=info msg="CreateContainer within sandbox \"c6e8299438c1ad8519929d34ad69dca9a00ba4da04dd5dc11dc644ef63791a29\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77\"" Jan 16 18:01:45.561144 containerd[1571]: time="2026-01-16T18:01:45.561080051Z" level=info msg="StartContainer for \"a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77\"" Jan 16 18:01:45.566094 containerd[1571]: time="2026-01-16T18:01:45.566011213Z" level=info msg="connecting to shim a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77" address="unix:///run/containerd/s/8906e4f0d2a8a359e8f3442d719d04488a44ca077e4842e3e703dec9b69b0737" protocol=ttrpc version=3 Jan 16 18:01:45.592970 systemd[1]: Started cri-containerd-a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77.scope - libcontainer container a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77. 
Jan 16 18:01:45.690052 kernel: kauditd_printk_skb: 12 callbacks suppressed Jan 16 18:01:45.690203 kernel: audit: type=1334 audit(1768586505.685:570): prog-id=170 op=LOAD Jan 16 18:01:45.685000 audit: BPF prog-id=170 op=LOAD Jan 16 18:01:45.685000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.694045 kernel: audit: type=1300 audit(1768586505.685:570): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.694138 kernel: audit: type=1327 audit(1768586505.685:570): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.685000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.697349 kernel: audit: type=1334 audit(1768586505.688:571): prog-id=171 op=LOAD Jan 16 18:01:45.688000 audit: BPF prog-id=171 op=LOAD Jan 16 18:01:45.688000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.700782 kernel: audit: type=1300 audit(1768586505.688:571): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.707562 kernel: audit: type=1327 audit(1768586505.688:571): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.688000 audit: BPF prog-id=171 op=UNLOAD Jan 16 18:01:45.688000 audit[3765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.710818 kernel: audit: type=1334 audit(1768586505.688:572): prog-id=171 op=UNLOAD Jan 16 18:01:45.710863 kernel: audit: type=1300 audit(1768586505.688:572): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.688000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.714413 kernel: audit: type=1327 audit(1768586505.688:572): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.688000 audit: BPF prog-id=170 op=UNLOAD Jan 16 18:01:45.715719 kernel: audit: type=1334 audit(1768586505.688:573): prog-id=170 op=UNLOAD Jan 16 18:01:45.688000 audit[3765]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.688000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.688000 audit: BPF prog-id=172 op=LOAD Jan 16 18:01:45.688000 audit[3765]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=3310 pid=3765 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:45.688000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6136393732356337323765383038336564613234623966626563393064 Jan 16 18:01:45.742497 containerd[1571]: time="2026-01-16T18:01:45.740350143Z" level=info msg="StartContainer for \"a69725c727e8083eda24b9fbec90de7ff5df5e56a97a35d5509be8c60ff4ad77\" returns successfully" Jan 16 18:01:45.902006 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 16 18:01:45.902694 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. Jan 16 18:01:46.120526 kubelet[2796]: I0116 18:01:46.120483 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-ca-bundle\") pod \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " Jan 16 18:01:46.121013 kubelet[2796]: I0116 18:01:46.120545 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-backend-key-pair\") pod \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " Jan 16 18:01:46.121013 kubelet[2796]: I0116 18:01:46.120571 2796 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fmzzn\" (UniqueName: \"kubernetes.io/projected/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-kube-api-access-fmzzn\") pod \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\" (UID: \"f2cd278e-9655-4f08-a8bd-cf1ad09f05be\") " Jan 16 18:01:46.123007 kubelet[2796]: I0116 18:01:46.122613 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-ca-bundle" 
(OuterVolumeSpecName: "whisker-ca-bundle") pod "f2cd278e-9655-4f08-a8bd-cf1ad09f05be" (UID: "f2cd278e-9655-4f08-a8bd-cf1ad09f05be"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 16 18:01:46.140051 kubelet[2796]: I0116 18:01:46.139863 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-kube-api-access-fmzzn" (OuterVolumeSpecName: "kube-api-access-fmzzn") pod "f2cd278e-9655-4f08-a8bd-cf1ad09f05be" (UID: "f2cd278e-9655-4f08-a8bd-cf1ad09f05be"). InnerVolumeSpecName "kube-api-access-fmzzn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 16 18:01:46.141212 kubelet[2796]: I0116 18:01:46.141160 2796 operation_generator.go:780] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "f2cd278e-9655-4f08-a8bd-cf1ad09f05be" (UID: "f2cd278e-9655-4f08-a8bd-cf1ad09f05be"). InnerVolumeSpecName "whisker-backend-key-pair". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 16 18:01:46.221643 kubelet[2796]: I0116 18:01:46.221232 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-backend-key-pair\") on node \"ci-4580-0-0-p-e4bb445d88\" DevicePath \"\"" Jan 16 18:01:46.221643 kubelet[2796]: I0116 18:01:46.221279 2796 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fmzzn\" (UniqueName: \"kubernetes.io/projected/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-kube-api-access-fmzzn\") on node \"ci-4580-0-0-p-e4bb445d88\" DevicePath \"\"" Jan 16 18:01:46.221643 kubelet[2796]: I0116 18:01:46.221305 2796 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f2cd278e-9655-4f08-a8bd-cf1ad09f05be-whisker-ca-bundle\") on node \"ci-4580-0-0-p-e4bb445d88\" DevicePath \"\"" Jan 16 18:01:46.468849 systemd[1]: var-lib-kubelet-pods-f2cd278e\x2d9655\x2d4f08\x2da8bd\x2dcf1ad09f05be-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2dfmzzn.mount: Deactivated successfully. Jan 16 18:01:46.468979 systemd[1]: var-lib-kubelet-pods-f2cd278e\x2d9655\x2d4f08\x2da8bd\x2dcf1ad09f05be-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. Jan 16 18:01:46.510117 systemd[1]: Removed slice kubepods-besteffort-podf2cd278e_9655_4f08_a8bd_cf1ad09f05be.slice - libcontainer container kubepods-besteffort-podf2cd278e_9655_4f08_a8bd_cf1ad09f05be.slice. 
Jan 16 18:01:46.771214 kubelet[2796]: I0116 18:01:46.770973 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-v7kmr" podStartSLOduration=2.489618797 podStartE2EDuration="15.770938492s" podCreationTimestamp="2026-01-16 18:01:31 +0000 UTC" firstStartedPulling="2026-01-16 18:01:32.224058331 +0000 UTC m=+27.841564395" lastFinishedPulling="2026-01-16 18:01:45.505378026 +0000 UTC m=+41.122884090" observedRunningTime="2026-01-16 18:01:46.768841935 +0000 UTC m=+42.386347999" watchObservedRunningTime="2026-01-16 18:01:46.770938492 +0000 UTC m=+42.388444556" Jan 16 18:01:46.855957 systemd[1]: Created slice kubepods-besteffort-pod53e714ba_c9e3_42df_ae04_537a6b215f2f.slice - libcontainer container kubepods-besteffort-pod53e714ba_c9e3_42df_ae04_537a6b215f2f.slice. Jan 16 18:01:46.927810 kubelet[2796]: I0116 18:01:46.927748 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ldhg8\" (UniqueName: \"kubernetes.io/projected/53e714ba-c9e3-42df-ae04-537a6b215f2f-kube-api-access-ldhg8\") pod \"whisker-76db8c587c-rvj9m\" (UID: \"53e714ba-c9e3-42df-ae04-537a6b215f2f\") " pod="calico-system/whisker-76db8c587c-rvj9m" Jan 16 18:01:46.927810 kubelet[2796]: I0116 18:01:46.927806 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/53e714ba-c9e3-42df-ae04-537a6b215f2f-whisker-backend-key-pair\") pod \"whisker-76db8c587c-rvj9m\" (UID: \"53e714ba-c9e3-42df-ae04-537a6b215f2f\") " pod="calico-system/whisker-76db8c587c-rvj9m" Jan 16 18:01:46.928022 kubelet[2796]: I0116 18:01:46.927828 2796 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/53e714ba-c9e3-42df-ae04-537a6b215f2f-whisker-ca-bundle\") pod \"whisker-76db8c587c-rvj9m\" (UID: 
\"53e714ba-c9e3-42df-ae04-537a6b215f2f\") " pod="calico-system/whisker-76db8c587c-rvj9m" Jan 16 18:01:47.163151 containerd[1571]: time="2026-01-16T18:01:47.162842016Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76db8c587c-rvj9m,Uid:53e714ba-c9e3-42df-ae04-537a6b215f2f,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:47.386846 systemd-networkd[1471]: cali35b8b7d4c28: Link UP Jan 16 18:01:47.387081 systemd-networkd[1471]: cali35b8b7d4c28: Gained carrier Jan 16 18:01:47.416868 containerd[1571]: 2026-01-16 18:01:47.193 [INFO][3829] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 16 18:01:47.416868 containerd[1571]: 2026-01-16 18:01:47.253 [INFO][3829] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0 whisker-76db8c587c- calico-system 53e714ba-c9e3-42df-ae04-537a6b215f2f 906 0 2026-01-16 18:01:46 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:76db8c587c projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 whisker-76db8c587c-rvj9m eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali35b8b7d4c28 [] [] }} ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-" Jan 16 18:01:47.416868 containerd[1571]: 2026-01-16 18:01:47.253 [INFO][3829] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.416868 containerd[1571]: 2026-01-16 18:01:47.311 [INFO][3841] 
ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" HandleID="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Workload="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.311 [INFO][3841] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" HandleID="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Workload="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002316e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"whisker-76db8c587c-rvj9m", "timestamp":"2026-01-16 18:01:47.311714157 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.311 [INFO][3841] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.312 [INFO][3841] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.312 [INFO][3841] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.325 [INFO][3841] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.333 [INFO][3841] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.346 [INFO][3841] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.349 [INFO][3841] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417114 containerd[1571]: 2026-01-16 18:01:47.353 [INFO][3841] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.353 [INFO][3841] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.355 [INFO][3841] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46 Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.360 [INFO][3841] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.369 [INFO][3841] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.1/26] block=192.168.122.0/26 handle="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.369 [INFO][3841] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.1/26] handle="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.369 [INFO][3841] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:47.417403 containerd[1571]: 2026-01-16 18:01:47.370 [INFO][3841] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.1/26] IPv6=[] ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" HandleID="k8s-pod-network.c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Workload="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.419010 containerd[1571]: 2026-01-16 18:01:47.373 [INFO][3829] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0", GenerateName:"whisker-76db8c587c-", Namespace:"calico-system", SelfLink:"", UID:"53e714ba-c9e3-42df-ae04-537a6b215f2f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76db8c587c", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"whisker-76db8c587c-rvj9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35b8b7d4c28", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:47.419010 containerd[1571]: 2026-01-16 18:01:47.374 [INFO][3829] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.1/32] ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.419422 containerd[1571]: 2026-01-16 18:01:47.374 [INFO][3829] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali35b8b7d4c28 ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.419422 containerd[1571]: 2026-01-16 18:01:47.387 [INFO][3829] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.419516 containerd[1571]: 2026-01-16 18:01:47.391 [INFO][3829] cni-plugin/k8s.go 
446: Added Mac, interface name, and active container ID to endpoint ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0", GenerateName:"whisker-76db8c587c-", Namespace:"calico-system", SelfLink:"", UID:"53e714ba-c9e3-42df-ae04-537a6b215f2f", ResourceVersion:"906", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 46, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"76db8c587c", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46", Pod:"whisker-76db8c587c-rvj9m", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.122.1/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali35b8b7d4c28", MAC:"f6:ed:2b:0c:ad:58", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:47.419614 containerd[1571]: 2026-01-16 18:01:47.410 [INFO][3829] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" Namespace="calico-system" Pod="whisker-76db8c587c-rvj9m" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-whisker--76db8c587c--rvj9m-eth0" Jan 16 18:01:47.468503 containerd[1571]: time="2026-01-16T18:01:47.468404321Z" level=info msg="connecting to shim c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46" address="unix:///run/containerd/s/b04c20a635a674f6727a825e44c03db99912783e63ab790cff7b0fa3cc16e1a4" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:47.530946 systemd[1]: Started cri-containerd-c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46.scope - libcontainer container c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46. Jan 16 18:01:47.566000 audit: BPF prog-id=173 op=LOAD Jan 16 18:01:47.567000 audit: BPF prog-id=174 op=LOAD Jan 16 18:01:47.567000 audit[3928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.567000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.568000 audit: BPF prog-id=174 op=UNLOAD Jan 16 18:01:47.568000 audit[3928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.568000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.568000 audit: BPF prog-id=175 op=LOAD Jan 16 18:01:47.568000 audit[3928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.568000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.570000 audit: BPF prog-id=176 op=LOAD Jan 16 18:01:47.570000 audit[3928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.570000 audit: BPF prog-id=176 op=UNLOAD Jan 16 18:01:47.570000 audit[3928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:01:47.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.570000 audit: BPF prog-id=175 op=UNLOAD Jan 16 18:01:47.570000 audit[3928]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.570000 audit: BPF prog-id=177 op=LOAD Jan 16 18:01:47.570000 audit[3928]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=3907 pid=3928 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.570000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6334623330646334386335366164326565653038333135383064336162 Jan 16 18:01:47.638752 containerd[1571]: time="2026-01-16T18:01:47.638679300Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-76db8c587c-rvj9m,Uid:53e714ba-c9e3-42df-ae04-537a6b215f2f,Namespace:calico-system,Attempt:0,} returns sandbox id 
\"c4b30dc48c56ad2eee0831580d3ab1aa6365e09f7b7e203ab90bec761d19ac46\"" Jan 16 18:01:47.644754 containerd[1571]: time="2026-01-16T18:01:47.644062712Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:01:47.747041 kubelet[2796]: I0116 18:01:47.746902 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 18:01:47.873000 audit: BPF prog-id=178 op=LOAD Jan 16 18:01:47.873000 audit[3998]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe02e3678 a2=98 a3=ffffe02e3668 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.873000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.873000 audit: BPF prog-id=178 op=UNLOAD Jan 16 18:01:47.873000 audit[3998]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe02e3648 a3=0 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.873000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.874000 audit: BPF prog-id=179 op=LOAD Jan 16 18:01:47.874000 audit[3998]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe02e3528 a2=74 a3=95 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 
fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.874000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.874000 audit: BPF prog-id=179 op=UNLOAD Jan 16 18:01:47.874000 audit[3998]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.874000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.874000 audit: BPF prog-id=180 op=LOAD Jan 16 18:01:47.874000 audit[3998]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe02e3558 a2=40 a3=ffffe02e3588 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.874000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.874000 audit: BPF prog-id=180 op=UNLOAD Jan 16 18:01:47.874000 audit[3998]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 
a3=ffffe02e3588 items=0 ppid=3859 pid=3998 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.874000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 16 18:01:47.896000 audit: BPF prog-id=181 op=LOAD Jan 16 18:01:47.896000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe4048f18 a2=98 a3=ffffe4048f08 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.896000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:47.899000 audit: BPF prog-id=181 op=UNLOAD Jan 16 18:01:47.899000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe4048ee8 a3=0 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.899000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:47.901000 audit: BPF prog-id=182 op=LOAD Jan 16 18:01:47.901000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe4048ba8 a2=74 a3=95 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 
18:01:47.901000 audit: BPF prog-id=182 op=UNLOAD Jan 16 18:01:47.901000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.901000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:47.902000 audit: BPF prog-id=183 op=LOAD Jan 16 18:01:47.902000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe4048c08 a2=94 a3=2 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:47.902000 audit: BPF prog-id=183 op=UNLOAD Jan 16 18:01:47.902000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:47.902000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:47.984651 containerd[1571]: time="2026-01-16T18:01:47.984576949Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:47.986479 containerd[1571]: time="2026-01-16T18:01:47.986288121Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:01:47.986479 containerd[1571]: 
time="2026-01-16T18:01:47.986416288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:47.986795 kubelet[2796]: E0116 18:01:47.986674 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:01:47.986795 kubelet[2796]: E0116 18:01:47.986731 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:01:47.993660 kubelet[2796]: E0116 18:01:47.993554 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6a4e115491b41f3bacfeb616e3df811,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10
001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:47.996934 containerd[1571]: time="2026-01-16T18:01:47.996446271Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:01:48.072000 audit: BPF prog-id=184 op=LOAD Jan 16 18:01:48.072000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe4048bc8 a2=40 a3=ffffe4048bf8 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.072000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.072000 audit: BPF prog-id=184 op=UNLOAD Jan 16 18:01:48.072000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffe4048bf8 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.072000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.083000 audit: BPF prog-id=185 op=LOAD Jan 16 18:01:48.083000 audit[3999]: SYSCALL arch=c00000b7 
syscall=280 success=yes exit=5 a0=5 a1=ffffe4048bd8 a2=94 a3=4 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.083000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=185 op=UNLOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=186 op=LOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe4048a18 a2=94 a3=5 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=186 op=UNLOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=187 op=LOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe4048c48 a2=94 a3=6 items=0 ppid=3859 
pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=187 op=UNLOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.084000 audit: BPF prog-id=188 op=LOAD Jan 16 18:01:48.084000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe4048418 a2=94 a3=83 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.084000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.085000 audit: BPF prog-id=189 op=LOAD Jan 16 18:01:48.085000 audit[3999]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe40481d8 a2=94 a3=2 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.085000 audit: BPF prog-id=189 op=UNLOAD Jan 16 18:01:48.085000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.085000 audit: BPF prog-id=188 op=UNLOAD Jan 16 18:01:48.085000 audit[3999]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=9a48620 a3=9a3bb00 items=0 ppid=3859 pid=3999 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.085000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 16 18:01:48.100000 audit: BPF prog-id=190 op=LOAD Jan 16 18:01:48.100000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa8d8b38 a2=98 a3=fffffa8d8b28 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.100000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.101000 audit: BPF prog-id=190 op=UNLOAD Jan 16 18:01:48.101000 audit[4018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffffa8d8b08 a3=0 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.101000 audit: PROCTITLE 
proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.101000 audit: BPF prog-id=191 op=LOAD Jan 16 18:01:48.101000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa8d89e8 a2=74 a3=95 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.101000 audit: BPF prog-id=191 op=UNLOAD Jan 16 18:01:48.101000 audit[4018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.101000 audit: BPF prog-id=192 op=LOAD Jan 16 18:01:48.101000 audit[4018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffffa8d8a18 a2=40 a3=fffffa8d8a48 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.101000 audit: BPF prog-id=192 op=UNLOAD Jan 16 18:01:48.101000 audit[4018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffffa8d8a48 items=0 ppid=3859 pid=4018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.101000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 16 18:01:48.187356 systemd-networkd[1471]: vxlan.calico: Link UP Jan 16 18:01:48.187378 systemd-networkd[1471]: vxlan.calico: Gained carrier Jan 16 18:01:48.227000 audit: BPF prog-id=193 op=LOAD Jan 16 18:01:48.227000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbcd8d58 a2=98 a3=ffffcbcd8d48 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.227000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=193 op=UNLOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 
a1=57156c a2=ffffcbcd8d28 a3=0 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=194 op=LOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbcd8a38 a2=74 a3=95 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=194 op=UNLOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=195 op=LOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffcbcd8a98 a2=94 a3=2 items=0 ppid=3859 pid=4046 auid=4294967295 
uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=195 op=UNLOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=196 op=LOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcbcd8918 a2=40 a3=ffffcbcd8948 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=196 op=UNLOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffcbcd8948 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 
tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.228000 audit: BPF prog-id=197 op=LOAD Jan 16 18:01:48.228000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcbcd8a68 a2=94 a3=b7 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.228000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.229000 audit: BPF prog-id=197 op=UNLOAD Jan 16 18:01:48.229000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.229000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.230000 audit: BPF prog-id=198 op=LOAD Jan 16 18:01:48.230000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcbcd8118 a2=94 a3=2 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.230000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.232000 audit: BPF prog-id=198 op=UNLOAD Jan 16 18:01:48.232000 audit[4046]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.232000 audit: BPF prog-id=199 op=LOAD Jan 16 18:01:48.232000 audit[4046]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffcbcd82a8 a2=94 a3=30 items=0 ppid=3859 pid=4046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.232000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 16 18:01:48.239000 audit: BPF prog-id=200 op=LOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc14830e8 a2=98 a3=ffffc14830d8 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 
18:01:48.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.239000 audit: BPF prog-id=200 op=UNLOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc14830b8 a3=0 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.239000 audit: BPF prog-id=201 op=LOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc1482d78 a2=74 a3=95 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.239000 audit: BPF prog-id=201 op=UNLOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.239000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.239000 audit: BPF prog-id=202 op=LOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc1482dd8 a2=94 a3=2 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.239000 audit: BPF prog-id=202 op=UNLOAD Jan 16 18:01:48.239000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.239000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.348122 containerd[1571]: time="2026-01-16T18:01:48.347833514Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:48.350007 containerd[1571]: time="2026-01-16T18:01:48.349931824Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:01:48.351188 containerd[1571]: 
time="2026-01-16T18:01:48.349980667Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:48.351282 kubelet[2796]: E0116 18:01:48.350851 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:01:48.351282 kubelet[2796]: E0116 18:01:48.350897 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:01:48.351358 kubelet[2796]: E0116 18:01:48.351018 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:48.353158 kubelet[2796]: E0116 18:01:48.353107 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:01:48.368000 audit: BPF prog-id=203 op=LOAD Jan 16 18:01:48.368000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc1482d98 a2=40 a3=ffffc1482dc8 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.368000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.368000 audit: BPF prog-id=203 op=UNLOAD Jan 16 18:01:48.368000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc1482dc8 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.368000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.379000 audit: BPF prog-id=204 op=LOAD Jan 16 18:01:48.379000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc1482da8 a2=94 a3=4 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.379000 audit: BPF prog-id=204 op=UNLOAD Jan 16 18:01:48.379000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.379000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=205 op=LOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc1482be8 a2=94 a3=5 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=205 op=UNLOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=206 op=LOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc1482e18 a2=94 a3=6 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=206 op=UNLOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=207 op=LOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc14825e8 a2=94 a3=83 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=208 op=LOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc14823a8 a2=94 a3=2 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.380000 audit: BPF prog-id=208 op=UNLOAD Jan 16 18:01:48.380000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.380000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.381000 audit: BPF prog-id=207 op=UNLOAD Jan 16 18:01:48.381000 audit[4049]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=23320620 a3=23313b00 items=0 ppid=3859 pid=4049 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.381000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 16 18:01:48.394000 audit: BPF prog-id=199 op=UNLOAD Jan 16 18:01:48.394000 audit[3859]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4001394c00 a2=0 a3=0 items=0 ppid=3847 pid=3859 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.394000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 16 18:01:48.468000 audit[4072]: NETFILTER_CFG table=nat:123 family=2 entries=15 op=nft_register_chain pid=4072 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:48.468000 audit[4072]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffc8462d70 a2=0 a3=ffffbbb6dfa8 items=0 ppid=3859 pid=4072 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.468000 audit: PROCTITLE 
proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:48.474000 audit[4075]: NETFILTER_CFG table=mangle:124 family=2 entries=16 op=nft_register_chain pid=4075 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:48.474000 audit[4075]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffdfa852d0 a2=0 a3=ffffb08e6fa8 items=0 ppid=3859 pid=4075 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.474000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:48.477000 audit[4073]: NETFILTER_CFG table=raw:125 family=2 entries=21 op=nft_register_chain pid=4073 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:48.477000 audit[4073]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffde996e90 a2=0 a3=ffff8654bfa8 items=0 ppid=3859 pid=4073 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.477000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:48.500210 kubelet[2796]: I0116 18:01:48.500141 2796 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2cd278e-9655-4f08-a8bd-cf1ad09f05be" path="/var/lib/kubelet/pods/f2cd278e-9655-4f08-a8bd-cf1ad09f05be/volumes" Jan 16 18:01:48.486000 audit[4076]: NETFILTER_CFG table=filter:126 family=2 entries=94 op=nft_register_chain pid=4076 
subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:48.486000 audit[4076]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=53116 a0=3 a1=ffffcbca73f0 a2=0 a3=ffff90a60fa8 items=0 ppid=3859 pid=4076 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.486000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:48.564012 systemd-networkd[1471]: cali35b8b7d4c28: Gained IPv6LL Jan 16 18:01:48.753710 kubelet[2796]: E0116 18:01:48.753527 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:01:48.783000 audit[4087]: NETFILTER_CFG table=filter:127 family=2 entries=20 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:48.783000 audit[4087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffc7d89540 a2=0 a3=1 items=0 ppid=2936 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.783000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:48.788000 audit[4087]: NETFILTER_CFG table=nat:128 family=2 entries=14 op=nft_register_rule pid=4087 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:48.788000 audit[4087]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffc7d89540 a2=0 a3=1 items=0 ppid=2936 pid=4087 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:48.788000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:49.332008 systemd-networkd[1471]: vxlan.calico: Gained IPv6LL Jan 16 18:01:52.497081 containerd[1571]: time="2026-01-16T18:01:52.496995353Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-qhvz8,Uid:9b676f4f-1816-4b2f-88ec-512b756c1b31,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:01:52.666555 systemd-networkd[1471]: cali9fdc2219f9f: Link UP Jan 16 18:01:52.667912 systemd-networkd[1471]: cali9fdc2219f9f: Gained carrier Jan 16 18:01:52.699276 containerd[1571]: 2026-01-16 18:01:52.562 [INFO][4091] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0 calico-apiserver-66b59648fd- calico-apiserver 9b676f4f-1816-4b2f-88ec-512b756c1b31 840 0 2026-01-16 18:01:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b59648fd 
projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 calico-apiserver-66b59648fd-qhvz8 eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali9fdc2219f9f [] [] }} ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-" Jan 16 18:01:52.699276 containerd[1571]: 2026-01-16 18:01:52.563 [INFO][4091] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.699276 containerd[1571]: 2026-01-16 18:01:52.603 [INFO][4103] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" HandleID="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.603 [INFO][4103] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" HandleID="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d3000), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"calico-apiserver-66b59648fd-qhvz8", "timestamp":"2026-01-16 18:01:52.603081054 +0000 UTC"}, 
Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.603 [INFO][4103] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.603 [INFO][4103] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.603 [INFO][4103] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.615 [INFO][4103] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.623 [INFO][4103] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.631 [INFO][4103] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.634 [INFO][4103] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.699532 containerd[1571]: 2026-01-16 18:01:52.638 [INFO][4103] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.638 [INFO][4103] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.640 
[INFO][4103] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2 Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.647 [INFO][4103] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.658 [INFO][4103] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.2/26] block=192.168.122.0/26 handle="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.658 [INFO][4103] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.2/26] handle="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.658 [INFO][4103] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:01:52.701419 containerd[1571]: 2026-01-16 18:01:52.658 [INFO][4103] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.2/26] IPv6=[] ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" HandleID="k8s-pod-network.f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.704024 containerd[1571]: 2026-01-16 18:01:52.661 [INFO][4091] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0", GenerateName:"calico-apiserver-66b59648fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b676f4f-1816-4b2f-88ec-512b756c1b31", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b59648fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"calico-apiserver-66b59648fd-qhvz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", 
IPNetworks:[]string{"192.168.122.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fdc2219f9f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:52.704323 containerd[1571]: 2026-01-16 18:01:52.661 [INFO][4091] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.2/32] ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.704323 containerd[1571]: 2026-01-16 18:01:52.661 [INFO][4091] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9fdc2219f9f ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.704323 containerd[1571]: 2026-01-16 18:01:52.667 [INFO][4091] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.704401 containerd[1571]: 2026-01-16 18:01:52.668 [INFO][4091] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", 
APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0", GenerateName:"calico-apiserver-66b59648fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"9b676f4f-1816-4b2f-88ec-512b756c1b31", ResourceVersion:"840", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b59648fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2", Pod:"calico-apiserver-66b59648fd-qhvz8", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.2/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali9fdc2219f9f", MAC:"0a:2a:f0:32:ce:d5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:52.704458 containerd[1571]: 2026-01-16 18:01:52.693 [INFO][4091] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-qhvz8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--qhvz8-eth0" Jan 16 18:01:52.727071 kernel: kauditd_printk_skb: 231 callbacks suppressed Jan 16 
18:01:52.727236 kernel: audit: type=1325 audit(1768586512.721:651): table=filter:129 family=2 entries=50 op=nft_register_chain pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:52.721000 audit[4118]: NETFILTER_CFG table=filter:129 family=2 entries=50 op=nft_register_chain pid=4118 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:52.721000 audit[4118]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffcddf63e0 a2=0 a3=ffffbe128fa8 items=0 ppid=3859 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.736140 kernel: audit: type=1300 audit(1768586512.721:651): arch=c00000b7 syscall=211 success=yes exit=28208 a0=3 a1=ffffcddf63e0 a2=0 a3=ffffbe128fa8 items=0 ppid=3859 pid=4118 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.736412 kernel: audit: type=1327 audit(1768586512.721:651): proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:52.721000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:52.741655 containerd[1571]: time="2026-01-16T18:01:52.741308197Z" level=info msg="connecting to shim f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2" address="unix:///run/containerd/s/3af174eebc1b3aba3dd8ea986873517244b7f986eaf6ccfeb5cce5b1e29aacf3" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:52.783016 systemd[1]: Started 
cri-containerd-f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2.scope - libcontainer container f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2. Jan 16 18:01:52.810000 audit: BPF prog-id=209 op=LOAD Jan 16 18:01:52.812732 kernel: audit: type=1334 audit(1768586512.810:652): prog-id=209 op=LOAD Jan 16 18:01:52.813000 audit: BPF prog-id=210 op=LOAD Jan 16 18:01:52.813000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.817068 kernel: audit: type=1334 audit(1768586512.813:653): prog-id=210 op=LOAD Jan 16 18:01:52.817178 kernel: audit: type=1300 audit(1768586512.813:653): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.817232 kernel: audit: type=1327 audit(1768586512.813:653): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.813000 audit: BPF prog-id=210 op=UNLOAD Jan 16 18:01:52.813000 audit[4139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4127 pid=4139 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.821860 kernel: audit: type=1334 audit(1768586512.813:654): prog-id=210 op=UNLOAD Jan 16 18:01:52.821961 kernel: audit: type=1300 audit(1768586512.813:654): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.821994 kernel: audit: type=1327 audit(1768586512.813:654): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.813000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.814000 audit: BPF prog-id=211 op=LOAD Jan 16 18:01:52.814000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.814000 audit: BPF prog-id=212 
op=LOAD Jan 16 18:01:52.814000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.814000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.818000 audit: BPF prog-id=212 op=UNLOAD Jan 16 18:01:52.818000 audit[4139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.818000 audit: BPF prog-id=211 op=UNLOAD Jan 16 18:01:52.818000 audit[4139]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 
18:01:52.818000 audit: BPF prog-id=213 op=LOAD Jan 16 18:01:52.818000 audit[4139]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4127 pid=4139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:52.818000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6633643333666331623439613330633263633134303365383564303062 Jan 16 18:01:52.854938 containerd[1571]: time="2026-01-16T18:01:52.854868572Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-qhvz8,Uid:9b676f4f-1816-4b2f-88ec-512b756c1b31,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f3d33fc1b49a30c2cc1403e85d00b7d916f84182c1edc272f98c06b88c2c98c2\"" Jan 16 18:01:52.856853 containerd[1571]: time="2026-01-16T18:01:52.856785383Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:01:53.206987 containerd[1571]: time="2026-01-16T18:01:53.206917939Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:53.209300 containerd[1571]: time="2026-01-16T18:01:53.209239526Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:01:53.209473 containerd[1571]: time="2026-01-16T18:01:53.209355452Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:53.209876 kubelet[2796]: E0116 18:01:53.209797 2796 log.go:32] "PullImage from image service failed" err="rpc error: code 
= NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:01:53.209876 kubelet[2796]: E0116 18:01:53.209850 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:01:53.211761 kubelet[2796]: E0116 18:01:53.210315 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw62r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:53.211761 kubelet[2796]: E0116 18:01:53.211777 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:01:53.500942 containerd[1571]: time="2026-01-16T18:01:53.498330932Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-flqcp,Uid:99950396-08ab-48b3-8770-e8de9a7d70c4,Namespace:kube-system,Attempt:0,}" Jan 16 18:01:53.500942 containerd[1571]: 
time="2026-01-16T18:01:53.498493419Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-pnwsh,Uid:32aa7e73-83e3-4413-b798-e52a9caaa69f,Namespace:calico-apiserver,Attempt:0,}" Jan 16 18:01:53.501541 containerd[1571]: time="2026-01-16T18:01:53.501468596Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7g7,Uid:b5830553-7e67-4092-9e5b-09dd5a3e6768,Namespace:kube-system,Attempt:0,}" Jan 16 18:01:53.761771 systemd-networkd[1471]: cali5b424fa3b78: Link UP Jan 16 18:01:53.763132 systemd-networkd[1471]: cali5b424fa3b78: Gained carrier Jan 16 18:01:53.777592 kubelet[2796]: E0116 18:01:53.777366 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:01:53.799178 containerd[1571]: 2026-01-16 18:01:53.631 [INFO][4183] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0 coredns-668d6bf9bc- kube-system b5830553-7e67-4092-9e5b-09dd5a3e6768 833 0 2026-01-16 18:01:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 coredns-668d6bf9bc-dq7g7 eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali5b424fa3b78 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} 
ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-" Jan 16 18:01:53.799178 containerd[1571]: 2026-01-16 18:01:53.631 [INFO][4183] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.799178 containerd[1571]: 2026-01-16 18:01:53.685 [INFO][4213] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" HandleID="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.686 [INFO][4213] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" HandleID="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b200), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"coredns-668d6bf9bc-dq7g7", "timestamp":"2026-01-16 18:01:53.685798933 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.686 [INFO][4213] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. 
Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.686 [INFO][4213] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.686 [INFO][4213] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.700 [INFO][4213] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.711 [INFO][4213] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.723 [INFO][4213] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.728 [INFO][4213] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.799796 containerd[1571]: 2026-01-16 18:01:53.732 [INFO][4213] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.732 [INFO][4213] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.734 [INFO][4213] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.741 [INFO][4213] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" 
host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.754 [INFO][4213] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.3/26] block=192.168.122.0/26 handle="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.755 [INFO][4213] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.3/26] handle="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.755 [INFO][4213] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:53.800270 containerd[1571]: 2026-01-16 18:01:53.756 [INFO][4213] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.3/26] IPv6=[] ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" HandleID="k8s-pod-network.e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.758 [INFO][4183] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5830553-7e67-4092-9e5b-09dd5a3e6768", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), 
Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"coredns-668d6bf9bc-dq7g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.3/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b424fa3b78", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.758 [INFO][4183] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.3/32] ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.758 [INFO][4183] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali5b424fa3b78 ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" 
WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.769 [INFO][4183] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.771 [INFO][4183] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"b5830553-7e67-4092-9e5b-09dd5a3e6768", ResourceVersion:"833", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac", Pod:"coredns-668d6bf9bc-dq7g7", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.3/32"}, 
IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali5b424fa3b78", MAC:"8a:f6:07:a2:a0:74", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:53.800816 containerd[1571]: 2026-01-16 18:01:53.795 [INFO][4183] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" Namespace="kube-system" Pod="coredns-668d6bf9bc-dq7g7" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--dq7g7-eth0" Jan 16 18:01:53.854565 containerd[1571]: time="2026-01-16T18:01:53.853566706Z" level=info msg="connecting to shim e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac" address="unix:///run/containerd/s/ce3cd74fdcf4d4b1f2bf35ab93d60ece10a369cb48c5d476615f39105893a49f" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:53.913168 systemd[1]: Started cri-containerd-e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac.scope - libcontainer container e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac. 
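An aside for readers of this log: the audit `PROCTITLE` records that follow (e.g. the `iptables-restore` entries) carry the audited process's command line hex-encoded, with NUL bytes separating the argv elements. A minimal sketch of a decoder, assuming the standard Linux audit encoding; the hex string used is copied verbatim from the log:

```python
def decode_proctitle(hex_str: str) -> list[str]:
    """Decode an audit PROCTITLE hex string into its argv list.

    The kernel emits the process command line as hex, with NUL (0x00)
    bytes separating the individual arguments.
    """
    raw = bytes.fromhex(hex_str)
    return [arg.decode("utf-8", errors="replace")
            for arg in raw.split(b"\x00") if arg]

# proctitle value from the iptables-restore audit records in this log
argv = decode_proctitle(
    "69707461626C65732D726573746F7265002D770035002D5700313030303030"
    "002D2D6E6F666C757368002D2D636F756E74657273"
)
print(" ".join(argv))  # iptables-restore -w 5 -W 100000 --noflush --counters
```

Applied to the `runc` records further down, the same decoding recovers `runc --root /run/containerd/runc/k8s.io --log …` (truncated in the audit record itself).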
Jan 16 18:01:53.910000 audit[4272]: NETFILTER_CFG table=filter:130 family=2 entries=20 op=nft_register_rule pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:53.910000 audit[4272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffdbef1d50 a2=0 a3=1 items=0 ppid=2936 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.910000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:53.918000 audit[4272]: NETFILTER_CFG table=nat:131 family=2 entries=14 op=nft_register_rule pid=4272 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:53.925852 systemd-networkd[1471]: cali89bacfe506a: Link UP Jan 16 18:01:53.918000 audit[4272]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffdbef1d50 a2=0 a3=1 items=0 ppid=2936 pid=4272 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.918000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:53.928414 systemd-networkd[1471]: cali89bacfe506a: Gained carrier Jan 16 18:01:53.948000 audit: BPF prog-id=214 op=LOAD Jan 16 18:01:53.949000 audit: BPF prog-id=215 op=LOAD Jan 16 18:01:53.949000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106180 a2=98 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.949000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.949000 audit: BPF prog-id=215 op=UNLOAD Jan 16 18:01:53.949000 audit[4258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.949000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.951000 audit: BPF prog-id=216 op=LOAD Jan 16 18:01:53.951000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001063e8 a2=98 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.951000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.953000 audit: BPF prog-id=217 op=LOAD Jan 16 18:01:53.953000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000106168 a2=98 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:01:53.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.953000 audit: BPF prog-id=217 op=UNLOAD Jan 16 18:01:53.953000 audit[4258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.953000 audit: BPF prog-id=216 op=UNLOAD Jan 16 18:01:53.953000 audit[4258]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.953000 audit: BPF prog-id=218 op=LOAD Jan 16 18:01:53.953000 audit[4258]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000106648 a2=98 a3=0 items=0 ppid=4244 pid=4258 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.953000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643535353132653835613834623631653262366630326432323830 Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.634 [INFO][4184] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0 calico-apiserver-66b59648fd- calico-apiserver 32aa7e73-83e3-4413-b798-e52a9caaa69f 841 0 2026-01-16 18:01:23 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:66b59648fd projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 calico-apiserver-66b59648fd-pnwsh eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali89bacfe506a [] [] }} ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.635 [INFO][4184] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.714 [INFO][4215] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" HandleID="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.714 [INFO][4215] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" HandleID="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d2fe0), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"calico-apiserver-66b59648fd-pnwsh", "timestamp":"2026-01-16 18:01:53.714147719 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.714 [INFO][4215] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.755 [INFO][4215] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.755 [INFO][4215] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.802 [INFO][4215] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.819 [INFO][4215] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.850 [INFO][4215] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.856 [INFO][4215] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.875 [INFO][4215] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.876 [INFO][4215] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.879 [INFO][4215] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8 Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.892 [INFO][4215] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.906 [INFO][4215] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.4/26] block=192.168.122.0/26 handle="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.907 [INFO][4215] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.4/26] handle="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.907 [INFO][4215] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:53.962533 containerd[1571]: 2026-01-16 18:01:53.907 [INFO][4215] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.4/26] IPv6=[] ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" HandleID="k8s-pod-network.f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.912 [INFO][4184] cni-plugin/k8s.go 418: Populated endpoint ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0", GenerateName:"calico-apiserver-66b59648fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"32aa7e73-83e3-4413-b798-e52a9caaa69f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", 
"app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b59648fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"calico-apiserver-66b59648fd-pnwsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali89bacfe506a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.915 [INFO][4184] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.4/32] ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.916 [INFO][4184] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali89bacfe506a ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.930 [INFO][4184] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" 
Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.932 [INFO][4184] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0", GenerateName:"calico-apiserver-66b59648fd-", Namespace:"calico-apiserver", SelfLink:"", UID:"32aa7e73-83e3-4413-b798-e52a9caaa69f", ResourceVersion:"841", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 23, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"66b59648fd", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8", Pod:"calico-apiserver-66b59648fd-pnwsh", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.122.4/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, 
InterfaceName:"cali89bacfe506a", MAC:"ee:00:e6:bc:e5:c5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:53.964105 containerd[1571]: 2026-01-16 18:01:53.956 [INFO][4184] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" Namespace="calico-apiserver" Pod="calico-apiserver-66b59648fd-pnwsh" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--apiserver--66b59648fd--pnwsh-eth0" Jan 16 18:01:53.992000 audit[4288]: NETFILTER_CFG table=filter:132 family=2 entries=46 op=nft_register_chain pid=4288 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:53.992000 audit[4288]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=23740 a0=3 a1=fffff547ae70 a2=0 a3=ffff95387fa8 items=0 ppid=3859 pid=4288 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:53.992000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:54.010558 containerd[1571]: time="2026-01-16T18:01:54.010329800Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-dq7g7,Uid:b5830553-7e67-4092-9e5b-09dd5a3e6768,Namespace:kube-system,Attempt:0,} returns sandbox id \"e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac\"" Jan 16 18:01:54.022146 containerd[1571]: time="2026-01-16T18:01:54.021931561Z" level=info msg="CreateContainer within sandbox \"e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:01:54.025834 containerd[1571]: time="2026-01-16T18:01:54.025767013Z" level=info msg="connecting to shim 
f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8" address="unix:///run/containerd/s/56af7ff90f11d1d1ae73fe3569e99edb73cb4585772715a7321aac2108e0c4c4" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:54.050689 containerd[1571]: time="2026-01-16T18:01:54.050635249Z" level=info msg="Container 3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:54.059000 audit[4325]: NETFILTER_CFG table=filter:133 family=2 entries=45 op=nft_register_chain pid=4325 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:54.059000 audit[4325]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24264 a0=3 a1=fffffd79a770 a2=0 a3=ffff86d75fa8 items=0 ppid=3859 pid=4325 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.059000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:54.063856 systemd-networkd[1471]: calic19a81e8faa: Link UP Jan 16 18:01:54.069320 systemd-networkd[1471]: calic19a81e8faa: Gained carrier Jan 16 18:01:54.074501 containerd[1571]: time="2026-01-16T18:01:54.074097823Z" level=info msg="CreateContainer within sandbox \"e4d55512e85a84b61e2b6f02d22809e69a3fa62be533df755eeaff7f1549c2ac\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3\"" Jan 16 18:01:54.077654 containerd[1571]: time="2026-01-16T18:01:54.077084837Z" level=info msg="StartContainer for \"3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3\"" Jan 16 18:01:54.080797 containerd[1571]: time="2026-01-16T18:01:54.080731560Z" level=info msg="connecting to shim 
3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3" address="unix:///run/containerd/s/ce3cd74fdcf4d4b1f2bf35ab93d60ece10a369cb48c5d476615f39105893a49f" protocol=ttrpc version=3 Jan 16 18:01:54.105135 systemd[1]: Started cri-containerd-f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8.scope - libcontainer container f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8. Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.618 [INFO][4171] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0 coredns-668d6bf9bc- kube-system 99950396-08ab-48b3-8770-e8de9a7d70c4 843 0 2026-01-16 18:01:11 +0000 UTC map[k8s-app:kube-dns pod-template-hash:668d6bf9bc projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 coredns-668d6bf9bc-flqcp eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] calic19a81e8faa [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.619 [INFO][4171] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.720 [INFO][4208] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" 
HandleID="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.720 [INFO][4208] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" HandleID="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000103850), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"coredns-668d6bf9bc-flqcp", "timestamp":"2026-01-16 18:01:53.720119035 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.720 [INFO][4208] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.907 [INFO][4208] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.907 [INFO][4208] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.938 [INFO][4208] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.952 [INFO][4208] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.974 [INFO][4208] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.979 [INFO][4208] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.989 [INFO][4208] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.989 [INFO][4208] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:53.995 [INFO][4208] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2 Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:54.012 [INFO][4208] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:54.036 [INFO][4208] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.5/26] block=192.168.122.0/26 handle="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:54.036 [INFO][4208] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.5/26] handle="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:54.037 [INFO][4208] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:54.106016 containerd[1571]: 2026-01-16 18:01:54.037 [INFO][4208] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.5/26] IPv6=[] ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" HandleID="k8s-pod-network.122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Workload="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.046 [INFO][4171] cni-plugin/k8s.go 418: Populated endpoint ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"99950396-08ab-48b3-8770-e8de9a7d70c4", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", 
"projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"coredns-668d6bf9bc-flqcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic19a81e8faa", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.046 [INFO][4171] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.5/32] ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.046 [INFO][4171] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic19a81e8faa ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.073 [INFO][4171] 
cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.076 [INFO][4171] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0", GenerateName:"coredns-668d6bf9bc-", Namespace:"kube-system", SelfLink:"", UID:"99950396-08ab-48b3-8770-e8de9a7d70c4", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 11, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"668d6bf9bc", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2", Pod:"coredns-668d6bf9bc-flqcp", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.122.5/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"calic19a81e8faa", 
MAC:"b6:4a:1d:4d:90:02", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:54.108877 containerd[1571]: 2026-01-16 18:01:54.101 [INFO][4171] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" Namespace="kube-system" Pod="coredns-668d6bf9bc-flqcp" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-coredns--668d6bf9bc--flqcp-eth0" Jan 16 18:01:54.142565 systemd[1]: Started cri-containerd-3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3.scope - libcontainer container 3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3. 
Jan 16 18:01:54.157232 containerd[1571]: time="2026-01-16T18:01:54.157181553Z" level=info msg="connecting to shim 122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2" address="unix:///run/containerd/s/b7f63d6137b78c86baa345b79f90a43a63391a4fbc5cbaa8efa49416cfd89f24" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:54.177000 audit: BPF prog-id=219 op=LOAD Jan 16 18:01:54.180000 audit: BPF prog-id=220 op=LOAD Jan 16 18:01:54.180000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.180000 audit: BPF prog-id=220 op=UNLOAD Jan 16 18:01:54.180000 audit[4332]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.181000 audit: BPF prog-id=221 op=LOAD Jan 16 18:01:54.181000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.181000 audit: BPF prog-id=222 op=LOAD Jan 16 18:01:54.181000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.181000 audit: BPF prog-id=222 op=UNLOAD Jan 16 18:01:54.181000 audit[4332]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.181000 audit: BPF prog-id=221 op=UNLOAD Jan 16 18:01:54.181000 audit[4332]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 
euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.181000 audit: BPF prog-id=223 op=LOAD Jan 16 18:01:54.181000 audit[4332]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4244 pid=4332 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.181000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365343466306137343865343466376661633739383738663230636236 Jan 16 18:01:54.185000 audit: BPF prog-id=224 op=LOAD Jan 16 18:01:54.186000 audit: BPF prog-id=225 op=LOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=225 op=UNLOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 
syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=226 op=LOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=227 op=LOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=227 op=UNLOAD Jan 16 
18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=226 op=UNLOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.186000 audit: BPF prog-id=228 op=LOAD Jan 16 18:01:54.186000 audit[4318]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4306 pid=4318 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.186000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6634363734663136303434343565656661343865666333396262353131 Jan 16 18:01:54.207000 
audit[4386]: NETFILTER_CFG table=filter:134 family=2 entries=44 op=nft_register_chain pid=4386 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:54.207000 audit[4386]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=21532 a0=3 a1=ffffe8050640 a2=0 a3=ffff93361fa8 items=0 ppid=3859 pid=4386 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.207000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:54.215369 systemd[1]: Started cri-containerd-122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2.scope - libcontainer container 122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2. Jan 16 18:01:54.241000 audit: BPF prog-id=229 op=LOAD Jan 16 18:01:54.243000 audit: BPF prog-id=230 op=LOAD Jan 16 18:01:54.243000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.243000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.244000 audit: BPF prog-id=230 op=UNLOAD Jan 16 18:01:54.244000 audit[4385]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.244000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.245000 audit: BPF prog-id=231 op=LOAD Jan 16 18:01:54.245000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.245000 audit: BPF prog-id=232 op=LOAD Jan 16 18:01:54.245000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.245000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.246000 audit: BPF prog-id=232 op=UNLOAD Jan 16 18:01:54.246000 audit[4385]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.246000 audit: BPF prog-id=231 op=UNLOAD Jan 16 18:01:54.246000 audit[4385]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.246000 audit: BPF prog-id=233 op=LOAD Jan 16 18:01:54.246000 audit[4385]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4368 pid=4385 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.246000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3132326635383662613639623436326435333835366164303934623739 Jan 16 18:01:54.274251 containerd[1571]: time="2026-01-16T18:01:54.274135083Z" level=info msg="StartContainer for \"3e44f0a748e44f7fac79878f20cb69ba2e9582a09a523979e8cd1c9739c8f3d3\" returns successfully" Jan 16 18:01:54.291946 
containerd[1571]: time="2026-01-16T18:01:54.291880720Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-66b59648fd-pnwsh,Uid:32aa7e73-83e3-4413-b798-e52a9caaa69f,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"f4674f1604445eefa48efc39bb511139b411e18e42a43e87e03a88260a0aa5f8\"" Jan 16 18:01:54.295285 containerd[1571]: time="2026-01-16T18:01:54.294435914Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:01:54.320686 containerd[1571]: time="2026-01-16T18:01:54.319828774Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-668d6bf9bc-flqcp,Uid:99950396-08ab-48b3-8770-e8de9a7d70c4,Namespace:kube-system,Attempt:0,} returns sandbox id \"122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2\"" Jan 16 18:01:54.326050 containerd[1571]: time="2026-01-16T18:01:54.326000852Z" level=info msg="CreateContainer within sandbox \"122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 16 18:01:54.339188 containerd[1571]: time="2026-01-16T18:01:54.339135321Z" level=info msg="Container 4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:01:54.355385 containerd[1571]: time="2026-01-16T18:01:54.355272166Z" level=info msg="CreateContainer within sandbox \"122f586ba69b462d53856ad094b797ef1b0c003bea364a608aa8c89b90a44be2\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f\"" Jan 16 18:01:54.357658 containerd[1571]: time="2026-01-16T18:01:54.357121809Z" level=info msg="StartContainer for \"4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f\"" Jan 16 18:01:54.361430 containerd[1571]: time="2026-01-16T18:01:54.361361919Z" level=info msg="connecting to shim 4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f" 
address="unix:///run/containerd/s/b7f63d6137b78c86baa345b79f90a43a63391a4fbc5cbaa8efa49416cfd89f24" protocol=ttrpc version=3 Jan 16 18:01:54.393189 systemd[1]: Started cri-containerd-4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f.scope - libcontainer container 4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f. Jan 16 18:01:54.408000 audit: BPF prog-id=234 op=LOAD Jan 16 18:01:54.410000 audit: BPF prog-id=235 op=LOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=235 op=UNLOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=236 op=LOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=237 op=LOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=237 op=UNLOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=236 op=UNLOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4368 pid=4431 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.410000 audit: BPF prog-id=238 op=LOAD Jan 16 18:01:54.410000 audit[4431]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4368 pid=4431 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.410000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3433313635363666633833653534323464376132616664623037643266 Jan 16 18:01:54.440832 containerd[1571]: time="2026-01-16T18:01:54.440789765Z" level=info msg="StartContainer for \"4316566fc83e5424d7a2afdb07d2f795aec3f6ba03e6a4ea55e7e632e1dfbb7f\" returns successfully" Jan 16 18:01:54.644105 systemd-networkd[1471]: cali9fdc2219f9f: Gained IPv6LL Jan 16 18:01:54.646702 containerd[1571]: time="2026-01-16T18:01:54.646521801Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:54.655339 containerd[1571]: time="2026-01-16T18:01:54.655159069Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:54.655339 containerd[1571]: time="2026-01-16T18:01:54.655180830Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to 
pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:01:54.655684 kubelet[2796]: E0116 18:01:54.655491 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:01:54.655684 kubelet[2796]: E0116 18:01:54.655560 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:01:54.657458 kubelet[2796]: E0116 18:01:54.657196 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:54.658949 kubelet[2796]: E0116 18:01:54.658912 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:01:54.786392 kubelet[2796]: E0116 18:01:54.786201 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:01:54.794937 kubelet[2796]: E0116 18:01:54.794855 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:01:54.833524 kubelet[2796]: I0116 18:01:54.833442 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="kube-system/coredns-668d6bf9bc-flqcp" podStartSLOduration=43.833421712 podStartE2EDuration="43.833421712s" podCreationTimestamp="2026-01-16 18:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:54.808285583 +0000 UTC m=+50.425791647" watchObservedRunningTime="2026-01-16 18:01:54.833421712 +0000 UTC m=+50.450927776" Jan 16 18:01:54.838000 audit[4466]: NETFILTER_CFG table=filter:135 family=2 entries=20 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:54.838000 audit[4466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=ffffca4dd680 a2=0 a3=1 items=0 ppid=2936 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.838000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:54.841000 audit[4466]: NETFILTER_CFG table=nat:136 family=2 entries=14 op=nft_register_rule pid=4466 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:54.841000 audit[4466]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=3468 a0=3 a1=ffffca4dd680 a2=0 a3=1 items=0 ppid=2936 pid=4466 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.841000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:54.857950 kubelet[2796]: I0116 18:01:54.857845 2796 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-668d6bf9bc-dq7g7" podStartSLOduration=43.857825007 
podStartE2EDuration="43.857825007s" podCreationTimestamp="2026-01-16 18:01:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-16 18:01:54.856960129 +0000 UTC m=+50.474466153" watchObservedRunningTime="2026-01-16 18:01:54.857825007 +0000 UTC m=+50.475331071" Jan 16 18:01:54.896000 audit[4468]: NETFILTER_CFG table=filter:137 family=2 entries=17 op=nft_register_rule pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:54.896000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd09d4a40 a2=0 a3=1 items=0 ppid=2936 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.896000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:54.901000 audit[4468]: NETFILTER_CFG table=nat:138 family=2 entries=35 op=nft_register_chain pid=4468 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:54.901000 audit[4468]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14196 a0=3 a1=ffffd09d4a40 a2=0 a3=1 items=0 ppid=2936 pid=4468 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:54.901000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:54.963835 systemd-networkd[1471]: cali89bacfe506a: Gained IPv6LL Jan 16 18:01:55.477201 systemd-networkd[1471]: cali5b424fa3b78: Gained IPv6LL Jan 16 18:01:55.797022 kubelet[2796]: E0116 18:01:55.796897 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed 
to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:01:55.921000 audit[4470]: NETFILTER_CFG table=filter:139 family=2 entries=14 op=nft_register_rule pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:55.921000 audit[4470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcfad7f00 a2=0 a3=1 items=0 ppid=2936 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:55.921000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:55.937000 audit[4470]: NETFILTER_CFG table=nat:140 family=2 entries=56 op=nft_register_chain pid=4470 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:55.937000 audit[4470]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19860 a0=3 a1=ffffcfad7f00 a2=0 a3=1 items=0 ppid=2936 pid=4470 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:55.937000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:56.053126 systemd-networkd[1471]: calic19a81e8faa: Gained IPv6LL Jan 16 18:01:56.501006 containerd[1571]: time="2026-01-16T18:01:56.500943789Z" level=info 
msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d49b4987-blz52,Uid:5fef7a00-b60c-4abb-8c75-26be1bbddcd8,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:56.506140 containerd[1571]: time="2026-01-16T18:01:56.506068487Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnrx8,Uid:30371a55-b2a2-4dfe-86cf-86b9aadb477e,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:56.513267 containerd[1571]: time="2026-01-16T18:01:56.508803563Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5gfbb,Uid:91c39312-4785-4237-a846-6ef23d8c4ee9,Namespace:calico-system,Attempt:0,}" Jan 16 18:01:56.775953 systemd-networkd[1471]: cali19b0814fc7a: Link UP Jan 16 18:01:56.776306 systemd-networkd[1471]: cali19b0814fc7a: Gained carrier Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.623 [INFO][4472] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0 csi-node-driver- calico-system 30371a55-b2a2-4dfe-86cf-86b9aadb477e 741 0 2026-01-16 18:01:31 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 csi-node-driver-lnrx8 eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali19b0814fc7a [] [] }} ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.623 [INFO][4472] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.670 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" HandleID="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Workload="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.671 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" HandleID="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Workload="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d35a0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"csi-node-driver-lnrx8", "timestamp":"2026-01-16 18:01:56.670817826 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.671 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.671 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.671 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.691 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.707 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.730 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.735 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.741 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.742 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.744 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789 Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.752 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.762 [INFO][4508] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.6/26] block=192.168.122.0/26 handle="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.763 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.6/26] handle="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.763 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:56.817613 containerd[1571]: 2026-01-16 18:01:56.763 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.6/26] IPv6=[] ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" HandleID="k8s-pod-network.a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Workload="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.818242 containerd[1571]: 2026-01-16 18:01:56.767 [INFO][4472] cni-plugin/k8s.go 418: Populated endpoint ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30371a55-b2a2-4dfe-86cf-86b9aadb477e", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", 
"name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"csi-node-driver-lnrx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19b0814fc7a", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:56.818242 containerd[1571]: 2026-01-16 18:01:56.767 [INFO][4472] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.6/32] ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.818242 containerd[1571]: 2026-01-16 18:01:56.767 [INFO][4472] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali19b0814fc7a ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.818242 containerd[1571]: 2026-01-16 18:01:56.775 [INFO][4472] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.818242 
containerd[1571]: 2026-01-16 18:01:56.785 [INFO][4472] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"30371a55-b2a2-4dfe-86cf-86b9aadb477e", ResourceVersion:"741", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 31, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789", Pod:"csi-node-driver-lnrx8", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.122.6/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali19b0814fc7a", MAC:"46:fc:96:9b:a1:e8", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:56.818242 containerd[1571]: 
2026-01-16 18:01:56.810 [INFO][4472] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" Namespace="calico-system" Pod="csi-node-driver-lnrx8" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-csi--node--driver--lnrx8-eth0" Jan 16 18:01:56.858017 containerd[1571]: time="2026-01-16T18:01:56.856430495Z" level=info msg="connecting to shim a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789" address="unix:///run/containerd/s/f09a9bd941f2ae728d9cda0fad0608e6adbfc3e2c82321f9a208d7b58fc3309c" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:56.864000 audit[4554]: NETFILTER_CFG table=filter:141 family=2 entries=52 op=nft_register_chain pid=4554 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:56.864000 audit[4554]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=24328 a0=3 a1=fffff8804860 a2=0 a3=ffff8b45dfa8 items=0 ppid=3859 pid=4554 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:56.864000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:56.890558 systemd-networkd[1471]: cali978d9b24a7f: Link UP Jan 16 18:01:56.894025 systemd-networkd[1471]: cali978d9b24a7f: Gained carrier Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.683 [INFO][4476] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0 calico-kube-controllers-78d49b4987- calico-system 5fef7a00-b60c-4abb-8c75-26be1bbddcd8 844 0 2026-01-16 18:01:32 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers 
pod-template-hash:78d49b4987 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 calico-kube-controllers-78d49b4987-blz52 eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] cali978d9b24a7f [] [] }} ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.683 [INFO][4476] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.755 [INFO][4518] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" HandleID="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.756 [INFO][4518] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" HandleID="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002ab370), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-e4bb445d88", 
"pod":"calico-kube-controllers-78d49b4987-blz52", "timestamp":"2026-01-16 18:01:56.755910612 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.756 [INFO][4518] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.763 [INFO][4518] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.763 [INFO][4518] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.798 [INFO][4518] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.815 [INFO][4518] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.826 [INFO][4518] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.832 [INFO][4518] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.836 [INFO][4518] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.836 [INFO][4518] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" 
host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.846 [INFO][4518] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52 Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.853 [INFO][4518] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.874 [INFO][4518] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.122.7/26] block=192.168.122.0/26 handle="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.875 [INFO][4518] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.7/26] handle="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.875 [INFO][4518] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 16 18:01:56.922928 containerd[1571]: 2026-01-16 18:01:56.875 [INFO][4518] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.7/26] IPv6=[] ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" HandleID="k8s-pod-network.9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Workload="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.883 [INFO][4476] cni-plugin/k8s.go 418: Populated endpoint ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0", GenerateName:"calico-kube-controllers-78d49b4987-", Namespace:"calico-system", SelfLink:"", UID:"5fef7a00-b60c-4abb-8c75-26be1bbddcd8", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78d49b4987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"calico-kube-controllers-78d49b4987-blz52", Endpoint:"eth0", 
ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali978d9b24a7f", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.884 [INFO][4476] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.7/32] ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.884 [INFO][4476] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali978d9b24a7f ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.895 [INFO][4476] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.896 [INFO][4476] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" Pod="calico-kube-controllers-78d49b4987-blz52" 
WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0", GenerateName:"calico-kube-controllers-78d49b4987-", Namespace:"calico-system", SelfLink:"", UID:"5fef7a00-b60c-4abb-8c75-26be1bbddcd8", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 32, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"78d49b4987", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52", Pod:"calico-kube-controllers-78d49b4987-blz52", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.122.7/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"cali978d9b24a7f", MAC:"f2:aa:f1:cd:3b:b5", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:56.924084 containerd[1571]: 2026-01-16 18:01:56.917 [INFO][4476] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" Namespace="calico-system" 
Pod="calico-kube-controllers-78d49b4987-blz52" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-calico--kube--controllers--78d49b4987--blz52-eth0" Jan 16 18:01:56.935049 systemd[1]: Started cri-containerd-a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789.scope - libcontainer container a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789. Jan 16 18:01:56.986358 systemd-networkd[1471]: calic6e42d08bb5: Link UP Jan 16 18:01:56.988019 systemd-networkd[1471]: calic6e42d08bb5: Gained carrier Jan 16 18:01:57.006771 containerd[1571]: time="2026-01-16T18:01:57.006472921Z" level=info msg="connecting to shim 9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52" address="unix:///run/containerd/s/fc7bf8b934a9ace01137369e94ebdc76c6db32c9af0ab734f64afd942f749d53" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.695 [INFO][4493] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0 goldmane-666569f655- calico-system 91c39312-4785-4237-a846-6ef23d8c4ee9 842 0 2026-01-16 18:01:27 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4580-0-0-p-e4bb445d88 goldmane-666569f655-5gfbb eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calic6e42d08bb5 [] [] }} ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.696 [INFO][4493] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s 
ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.780 [INFO][4523] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" HandleID="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Workload="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.780 [INFO][4523] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" HandleID="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Workload="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400024b590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4580-0-0-p-e4bb445d88", "pod":"goldmane-666569f655-5gfbb", "timestamp":"2026-01-16 18:01:56.780219648 +0000 UTC"}, Hostname:"ci-4580-0-0-p-e4bb445d88", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.780 [INFO][4523] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.875 [INFO][4523] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.876 [INFO][4523] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4580-0-0-p-e4bb445d88' Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.899 [INFO][4523] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.920 [INFO][4523] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.939 [INFO][4523] ipam/ipam.go 511: Trying affinity for 192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.944 [INFO][4523] ipam/ipam.go 158: Attempting to load block cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.949 [INFO][4523] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.122.0/26 host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.949 [INFO][4523] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.122.0/26 handle="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.953 [INFO][4523] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.961 [INFO][4523] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.122.0/26 handle="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.971 [INFO][4523] ipam/ipam.go 1262: 
Successfully claimed IPs: [192.168.122.8/26] block=192.168.122.0/26 handle="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.972 [INFO][4523] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.122.8/26] handle="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" host="ci-4580-0-0-p-e4bb445d88" Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.973 [INFO][4523] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 16 18:01:57.022560 containerd[1571]: 2026-01-16 18:01:56.973 [INFO][4523] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.122.8/26] IPv6=[] ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" HandleID="k8s-pod-network.fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Workload="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:56.978 [INFO][4493] cni-plugin/k8s.go 418: Populated endpoint ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"91c39312-4785-4237-a846-6ef23d8c4ee9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", 
"projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"", Pod:"goldmane-666569f655-5gfbb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6e42d08bb5", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:56.978 [INFO][4493] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.122.8/32] ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:56.978 [INFO][4493] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calic6e42d08bb5 ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:56.986 [INFO][4493] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:56.991 [INFO][4493] 
cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"91c39312-4785-4237-a846-6ef23d8c4ee9", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 16, 18, 1, 27, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4580-0-0-p-e4bb445d88", ContainerID:"fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c", Pod:"goldmane-666569f655-5gfbb", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.122.8/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calic6e42d08bb5", MAC:"ba:42:4b:43:42:34", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 16 18:01:57.024319 containerd[1571]: 2026-01-16 18:01:57.013 [INFO][4493] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore 
ContainerID="fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" Namespace="calico-system" Pod="goldmane-666569f655-5gfbb" WorkloadEndpoint="ci--4580--0--0--p--e4bb445d88-k8s-goldmane--666569f655--5gfbb-eth0" Jan 16 18:01:57.044000 audit: BPF prog-id=239 op=LOAD Jan 16 18:01:57.041000 audit[4603]: NETFILTER_CFG table=filter:142 family=2 entries=62 op=nft_register_chain pid=4603 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:57.041000 audit[4603]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=28368 a0=3 a1=ffffff262210 a2=0 a3=ffff85a4bfa8 items=0 ppid=3859 pid=4603 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.041000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:57.046000 audit: BPF prog-id=240 op=LOAD Jan 16 18:01:57.046000 audit[4560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.046000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.047000 audit: BPF prog-id=240 op=UNLOAD Jan 16 18:01:57.047000 audit[4560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" 
exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.047000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.049000 audit: BPF prog-id=241 op=LOAD Jan 16 18:01:57.049000 audit[4560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.049000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.050000 audit: BPF prog-id=242 op=LOAD Jan 16 18:01:57.050000 audit[4560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.050000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.052000 audit: BPF prog-id=242 op=UNLOAD Jan 16 18:01:57.052000 audit[4560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 
fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.052000 audit: BPF prog-id=241 op=UNLOAD Jan 16 18:01:57.052000 audit[4560]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.052000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.053000 audit: BPF prog-id=243 op=LOAD Jan 16 18:01:57.053000 audit[4560]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4548 pid=4560 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.053000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130383666656239333431616466616561356431393639323462393436 Jan 16 18:01:57.097291 systemd[1]: Started cri-containerd-9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52.scope - libcontainer container 9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52. 
Jan 16 18:01:57.108000 audit[4636]: NETFILTER_CFG table=filter:143 family=2 entries=70 op=nft_register_chain pid=4636 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 16 18:01:57.108000 audit[4636]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=33956 a0=3 a1=ffffda160c30 a2=0 a3=ffffb5e85fa8 items=0 ppid=3859 pid=4636 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.108000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 16 18:01:57.117019 containerd[1571]: time="2026-01-16T18:01:57.116849264Z" level=info msg="connecting to shim fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c" address="unix:///run/containerd/s/7ade392a86252b08f52a0a2f2d5b039214fe01db2e79bd0c00cb2272fb838ffb" namespace=k8s.io protocol=ttrpc version=3 Jan 16 18:01:57.123701 containerd[1571]: time="2026-01-16T18:01:57.123488220Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-lnrx8,Uid:30371a55-b2a2-4dfe-86cf-86b9aadb477e,Namespace:calico-system,Attempt:0,} returns sandbox id \"a086feb9341adfaea5d196924b946f4c1008179c7946aee924a85e8afa0f6789\"" Jan 16 18:01:57.127449 containerd[1571]: time="2026-01-16T18:01:57.127416943Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:01:57.138000 audit: BPF prog-id=244 op=LOAD Jan 16 18:01:57.139000 audit: BPF prog-id=245 op=LOAD Jan 16 18:01:57.139000 audit[4613]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.139000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.140000 audit: BPF prog-id=245 op=UNLOAD Jan 16 18:01:57.140000 audit[4613]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.140000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.141000 audit: BPF prog-id=246 op=LOAD Jan 16 18:01:57.141000 audit[4613]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.141000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.142000 audit: BPF prog-id=247 op=LOAD Jan 16 18:01:57.142000 audit[4613]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:01:57.142000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.144000 audit: BPF prog-id=247 op=UNLOAD Jan 16 18:01:57.144000 audit[4613]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.144000 audit: BPF prog-id=246 op=UNLOAD Jan 16 18:01:57.144000 audit[4613]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.144000 audit: BPF prog-id=248 op=LOAD Jan 16 18:01:57.144000 audit[4613]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4596 pid=4613 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.144000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933393965666136343930373861326435373862393533376364626138 Jan 16 18:01:57.186940 systemd[1]: Started cri-containerd-fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c.scope - libcontainer container fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c. Jan 16 18:01:57.202219 containerd[1571]: time="2026-01-16T18:01:57.202165006Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-78d49b4987-blz52,Uid:5fef7a00-b60c-4abb-8c75-26be1bbddcd8,Namespace:calico-system,Attempt:0,} returns sandbox id \"9399efa649078a2d578b9537cdba8144b42225fa410d92527e46e5648ddbfe52\"" Jan 16 18:01:57.212000 audit: BPF prog-id=249 op=LOAD Jan 16 18:01:57.213000 audit: BPF prog-id=250 op=LOAD Jan 16 18:01:57.213000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.213000 audit: BPF prog-id=250 op=UNLOAD Jan 16 18:01:57.213000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.213000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.214000 audit: BPF prog-id=251 op=LOAD Jan 16 18:01:57.214000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.214000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.215000 audit: BPF prog-id=252 op=LOAD Jan 16 18:01:57.215000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.215000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.216000 audit: BPF prog-id=252 op=UNLOAD Jan 16 18:01:57.216000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.216000 audit: BPF prog-id=251 op=UNLOAD Jan 16 18:01:57.216000 audit[4662]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.216000 audit: BPF prog-id=253 op=LOAD Jan 16 18:01:57.216000 audit[4662]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=4647 pid=4662 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:57.216000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6661336166643736396130353339326335333261393234366165343232 Jan 16 18:01:57.260174 containerd[1571]: time="2026-01-16T18:01:57.260117412Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-5gfbb,Uid:91c39312-4785-4237-a846-6ef23d8c4ee9,Namespace:calico-system,Attempt:0,} 
returns sandbox id \"fa3afd769a05392c532a9246ae42288f3d47a25f575ccdd27b789ed517f5289c\"" Jan 16 18:01:57.480132 containerd[1571]: time="2026-01-16T18:01:57.480030462Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:57.482399 containerd[1571]: time="2026-01-16T18:01:57.482147230Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:01:57.482399 containerd[1571]: time="2026-01-16T18:01:57.482274915Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:57.483379 kubelet[2796]: E0116 18:01:57.482801 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:01:57.483379 kubelet[2796]: E0116 18:01:57.482854 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:01:57.483379 kubelet[2796]: E0116 18:01:57.483078 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 18:01:57.484770 containerd[1571]: time="2026-01-16T18:01:57.484019348Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:01:57.838082 containerd[1571]: time="2026-01-16T18:01:57.838033486Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:57.839475 containerd[1571]: time="2026-01-16T18:01:57.839409663Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:01:57.839572 containerd[1571]: time="2026-01-16T18:01:57.839513507Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:57.840014 kubelet[2796]: E0116 18:01:57.839796 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:01:57.840014 kubelet[2796]: E0116 18:01:57.839857 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:01:57.840160 kubelet[2796]: E0116 18:01:57.840071 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:57.842463 containerd[1571]: time="2026-01-16T18:01:57.840479067Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:01:57.842600 kubelet[2796]: E0116 18:01:57.841524 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:01:57.973068 systemd-networkd[1471]: cali978d9b24a7f: Gained IPv6LL Jan 16 18:01:58.039364 systemd-networkd[1471]: cali19b0814fc7a: Gained IPv6LL 
Jan 16 18:01:58.208072 containerd[1571]: time="2026-01-16T18:01:58.207905783Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:58.210126 containerd[1571]: time="2026-01-16T18:01:58.209922264Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:01:58.210126 containerd[1571]: time="2026-01-16T18:01:58.209927665Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:58.210601 kubelet[2796]: E0116 18:01:58.210491 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:01:58.211099 kubelet[2796]: E0116 18:01:58.210833 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:01:58.212750 kubelet[2796]: E0116 18:01:58.211450 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fk2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:58.212974 containerd[1571]: time="2026-01-16T18:01:58.212803421Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:01:58.214197 kubelet[2796]: E0116 18:01:58.214025 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:01:58.551088 containerd[1571]: time="2026-01-16T18:01:58.550872860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:01:58.553415 containerd[1571]: 
time="2026-01-16T18:01:58.553331719Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:01:58.553561 containerd[1571]: time="2026-01-16T18:01:58.553487285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:01:58.554113 kubelet[2796]: E0116 18:01:58.554053 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:01:58.554113 kubelet[2796]: E0116 18:01:58.554168 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:01:58.555850 kubelet[2796]: E0116 18:01:58.554923 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:01:58.557545 kubelet[2796]: E0116 18:01:58.557479 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:58.812978 kubelet[2796]: E0116 18:01:58.812839 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:01:58.814688 kubelet[2796]: E0116 18:01:58.814601 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" 
podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:01:58.815058 kubelet[2796]: E0116 18:01:58.814788 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:01:58.890000 audit[4702]: NETFILTER_CFG table=filter:144 family=2 entries=14 op=nft_register_rule pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:58.894575 kernel: kauditd_printk_skb: 233 callbacks suppressed Jan 16 18:01:58.894751 kernel: audit: type=1325 audit(1768586518.890:738): table=filter:144 family=2 entries=14 op=nft_register_rule pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:58.890000 audit[4702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5a122c0 a2=0 a3=1 items=0 ppid=2936 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:58.900019 kernel: audit: type=1300 audit(1768586518.890:738): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=fffff5a122c0 a2=0 a3=1 items=0 ppid=2936 pid=4702 
auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:58.890000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:58.902341 kernel: audit: type=1327 audit(1768586518.890:738): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:58.897000 audit[4702]: NETFILTER_CFG table=nat:145 family=2 entries=20 op=nft_register_rule pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:58.904772 kernel: audit: type=1325 audit(1768586518.897:739): table=nat:145 family=2 entries=20 op=nft_register_rule pid=4702 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:01:58.897000 audit[4702]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff5a122c0 a2=0 a3=1 items=0 ppid=2936 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:58.908153 kernel: audit: type=1300 audit(1768586518.897:739): arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff5a122c0 a2=0 a3=1 items=0 ppid=2936 pid=4702 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:01:58.897000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:01:58.909495 kernel: audit: type=1327 audit(1768586518.897:739): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 
18:01:58.997769 systemd-networkd[1471]: calic6e42d08bb5: Gained IPv6LL Jan 16 18:02:01.975349 kubelet[2796]: I0116 18:02:01.975195 2796 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 16 18:02:03.498224 containerd[1571]: time="2026-01-16T18:02:03.498156408Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:02:03.860828 containerd[1571]: time="2026-01-16T18:02:03.860588291Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:03.862919 containerd[1571]: time="2026-01-16T18:02:03.862575602Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:03.862919 containerd[1571]: time="2026-01-16T18:02:03.862586082Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:02:03.863181 kubelet[2796]: E0116 18:02:03.863136 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:02:03.863876 kubelet[2796]: E0116 18:02:03.863208 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:02:03.863876 kubelet[2796]: E0116 18:02:03.863431 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6a4e115491b41f3bacfeb616e3df811,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:03.867714 containerd[1571]: time="2026-01-16T18:02:03.867415934Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:02:04.199342 containerd[1571]: 
time="2026-01-16T18:02:04.198939423Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:04.200970 containerd[1571]: time="2026-01-16T18:02:04.200744366Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:02:04.200970 containerd[1571]: time="2026-01-16T18:02:04.200895251Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:04.201471 kubelet[2796]: E0116 18:02:04.201397 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:02:04.201675 kubelet[2796]: E0116 18:02:04.201589 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:02:04.202297 kubelet[2796]: E0116 18:02:04.202228 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:04.203641 kubelet[2796]: E0116 18:02:04.203545 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:02:07.498209 containerd[1571]: time="2026-01-16T18:02:07.498148888Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:02:07.838271 containerd[1571]: time="2026-01-16T18:02:07.837811387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:07.848163 containerd[1571]: time="2026-01-16T18:02:07.848084919Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:02:07.849103 containerd[1571]: time="2026-01-16T18:02:07.848895985Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:07.849874 kubelet[2796]: E0116 18:02:07.849792 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed 
to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:02:07.849874 kubelet[2796]: E0116 18:02:07.849851 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:02:07.852898 kubelet[2796]: E0116 18:02:07.850057 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:07.852898 kubelet[2796]: E0116 18:02:07.851425 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:02:09.497797 containerd[1571]: time="2026-01-16T18:02:09.497742364Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:02:09.842236 containerd[1571]: time="2026-01-16T18:02:09.842018331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:02:09.844231 containerd[1571]: time="2026-01-16T18:02:09.843767625Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:02:09.844763 containerd[1571]: time="2026-01-16T18:02:09.843799546Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:09.844855 kubelet[2796]: E0116 18:02:09.844673 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:02:09.844855 kubelet[2796]: E0116 18:02:09.844727 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:02:09.848535 kubelet[2796]: E0116 18:02:09.848377 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw62r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:09.852606 kubelet[2796]: E0116 18:02:09.852514 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:02:10.501946 containerd[1571]: time="2026-01-16T18:02:10.501885422Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:02:10.848891 containerd[1571]: time="2026-01-16T18:02:10.848824261Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:10.850564 containerd[1571]: time="2026-01-16T18:02:10.850502271Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:02:10.850884 containerd[1571]: time="2026-01-16T18:02:10.850651636Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:10.851269 kubelet[2796]: E0116 18:02:10.850894 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:02:10.851269 kubelet[2796]: E0116 18:02:10.850964 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull 
and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:02:10.851269 kubelet[2796]: E0116 18:02:10.851139 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:Fil
e,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:10.853458 containerd[1571]: time="2026-01-16T18:02:10.853409879Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:02:11.396642 containerd[1571]: time="2026-01-16T18:02:11.396558586Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:11.398505 containerd[1571]: time="2026-01-16T18:02:11.398318318Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:02:11.398505 containerd[1571]: time="2026-01-16T18:02:11.398390040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:11.399084 kubelet[2796]: E0116 18:02:11.398951 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:02:11.399310 kubelet[2796]: E0116 18:02:11.399271 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:02:11.399712 kubelet[2796]: E0116 18:02:11.399613 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,Vol
umeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:11.402380 kubelet[2796]: E0116 18:02:11.401958 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:02:13.498509 containerd[1571]: time="2026-01-16T18:02:13.498426380Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:02:13.835945 containerd[1571]: time="2026-01-16T18:02:13.835890008Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:13.837504 containerd[1571]: time="2026-01-16T18:02:13.837426692Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:02:13.837686 containerd[1571]: time="2026-01-16T18:02:13.837556095Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:13.837884 kubelet[2796]: E0116 18:02:13.837848 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:02:13.839897 kubelet[2796]: E0116 18:02:13.838505 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:02:13.839897 kubelet[2796]: E0116 18:02:13.838792 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:
,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fk2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:13.841161 containerd[1571]: time="2026-01-16T18:02:13.840491218Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 
16 18:02:13.841556 kubelet[2796]: E0116 18:02:13.841513 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:02:14.174717 containerd[1571]: time="2026-01-16T18:02:14.174520559Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:02:14.176646 containerd[1571]: time="2026-01-16T18:02:14.176543854Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:02:14.177005 containerd[1571]: time="2026-01-16T18:02:14.176880104Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:02:14.177939 kubelet[2796]: E0116 18:02:14.177824 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:02:14.177939 kubelet[2796]: E0116 18:02:14.177887 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:02:14.178431 kubelet[2796]: E0116 
18:02:14.178346 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:02:14.180457 kubelet[2796]: E0116 18:02:14.180197 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:02:18.498039 kubelet[2796]: E0116 18:02:18.497963 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:02:18.502049 kubelet[2796]: E0116 18:02:18.501976 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:02:23.498011 kubelet[2796]: E0116 18:02:23.497875 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:02:24.499212 kubelet[2796]: E0116 18:02:24.498227 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:02:26.499648 kubelet[2796]: E0116 18:02:26.499049 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:02:26.499648 kubelet[2796]: E0116 18:02:26.499375 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:02:29.499424 containerd[1571]: 
time="2026-01-16T18:02:29.499333462Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 18:02:29.867004 containerd[1571]: time="2026-01-16T18:02:29.866894518Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:29.869584 containerd[1571]: time="2026-01-16T18:02:29.869519770Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 18:02:29.869791 containerd[1571]: time="2026-01-16T18:02:29.869652413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:29.870038 kubelet[2796]: E0116 18:02:29.869989 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 18:02:29.870488 kubelet[2796]: E0116 18:02:29.870052 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 18:02:29.870488 kubelet[2796]: E0116 18:02:29.870207 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:29.872059 kubelet[2796]: E0116 18:02:29.871854 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f"
Jan 16 18:02:31.498156 containerd[1571]: time="2026-01-16T18:02:31.497950554Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\""
Jan 16 18:02:31.837212 containerd[1571]: time="2026-01-16T18:02:31.836936505Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:31.840668 containerd[1571]: time="2026-01-16T18:02:31.839160467Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found"
Jan 16 18:02:31.841087 containerd[1571]: time="2026-01-16T18:02:31.839395552Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:31.841500 kubelet[2796]: E0116 18:02:31.841436 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 18:02:31.841901 kubelet[2796]: E0116 18:02:31.841516 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4"
Jan 16 18:02:31.841901 kubelet[2796]: E0116 18:02:31.841795 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6a4e115491b41f3bacfeb616e3df811,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:31.844638 containerd[1571]: time="2026-01-16T18:02:31.844537651Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\""
Jan 16 18:02:32.380657 containerd[1571]: time="2026-01-16T18:02:32.379761674Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:32.384237 containerd[1571]: time="2026-01-16T18:02:32.384070635Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found"
Jan 16 18:02:32.384237 containerd[1571]: time="2026-01-16T18:02:32.384131556Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:32.384554 kubelet[2796]: E0116 18:02:32.384429 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 18:02:32.384554 kubelet[2796]: E0116 18:02:32.384488 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4"
Jan 16 18:02:32.385533 kubelet[2796]: E0116 18:02:32.384896 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:32.387586 kubelet[2796]: E0116 18:02:32.386936 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f"
Jan 16 18:02:36.499901 containerd[1571]: time="2026-01-16T18:02:36.499835659Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\""
Jan 16 18:02:36.834843 containerd[1571]: time="2026-01-16T18:02:36.834730606Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:36.836892 containerd[1571]: time="2026-01-16T18:02:36.836793842Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found"
Jan 16 18:02:36.837181 containerd[1571]: time="2026-01-16T18:02:36.836819723Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:36.837450 kubelet[2796]: E0116 18:02:36.837192 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 18:02:36.837450 kubelet[2796]: E0116 18:02:36.837255 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4"
Jan 16 18:02:36.837450 kubelet[2796]: E0116 18:02:36.837392 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:36.840258 containerd[1571]: time="2026-01-16T18:02:36.840195022Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\""
Jan 16 18:02:37.184979 containerd[1571]: time="2026-01-16T18:02:37.184444677Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:37.186428 containerd[1571]: time="2026-01-16T18:02:37.186356470Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found"
Jan 16 18:02:37.186537 containerd[1571]: time="2026-01-16T18:02:37.186467112Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:37.186785 kubelet[2796]: E0116 18:02:37.186712 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 18:02:37.186785 kubelet[2796]: E0116 18:02:37.186779 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4"
Jan 16 18:02:37.187190 kubelet[2796]: E0116 18:02:37.186910 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:37.188172 kubelet[2796]: E0116 18:02:37.188122 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e"
Jan 16 18:02:37.499096 containerd[1571]: time="2026-01-16T18:02:37.498952571Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\""
Jan 16 18:02:37.838099 containerd[1571]: time="2026-01-16T18:02:37.838047488Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:37.839710 containerd[1571]: time="2026-01-16T18:02:37.839567074Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found"
Jan 16 18:02:37.840052 containerd[1571]: time="2026-01-16T18:02:37.839667276Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:37.840176 kubelet[2796]: E0116 18:02:37.840068 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 18:02:37.840690 kubelet[2796]: E0116 18:02:37.840188 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4"
Jan 16 18:02:37.841454 kubelet[2796]: E0116 18:02:37.840793 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw62r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:37.842951 kubelet[2796]: E0116 18:02:37.842888 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31"
Jan 16 18:02:38.501338 containerd[1571]: time="2026-01-16T18:02:38.500534903Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\""
Jan 16 18:02:38.841346 containerd[1571]: time="2026-01-16T18:02:38.840942502Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:38.843026 containerd[1571]: time="2026-01-16T18:02:38.842855095Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found"
Jan 16 18:02:38.843026 containerd[1571]: time="2026-01-16T18:02:38.842972337Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:38.843665 kubelet[2796]: E0116 18:02:38.843531 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 18:02:38.843665 kubelet[2796]: E0116 18:02:38.843593 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4"
Jan 16 18:02:38.845106 kubelet[2796]: E0116 18:02:38.844382 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fk2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:38.845954 kubelet[2796]: E0116 18:02:38.845826 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9"
Jan 16 18:02:41.499757 containerd[1571]: time="2026-01-16T18:02:41.499637010Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\""
Jan 16 18:02:41.842899 containerd[1571]: time="2026-01-16T18:02:41.842820969Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io
Jan 16 18:02:41.844858 containerd[1571]: time="2026-01-16T18:02:41.844734360Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found"
Jan 16 18:02:41.845432 containerd[1571]: time="2026-01-16T18:02:41.844865482Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0"
Jan 16 18:02:41.845824 kubelet[2796]: E0116 18:02:41.845116 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 18:02:41.845824 kubelet[2796]: E0116 18:02:41.845179 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4"
Jan 16 18:02:41.846600 kubelet[2796]: E0116 18:02:41.846051 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError"
Jan 16 18:02:41.847414 kubelet[2796]: E0116 18:02:41.847285 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8"
Jan 16 18:02:43.497778 kubelet[2796]: E0116 18:02:43.497706 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f"
Jan 16 18:02:44.503511 kubelet[2796]: E0116 18:02:44.503455 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f"
Jan 16 18:02:48.501932 kubelet[2796]: E0116 18:02:48.501866 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed 
to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:02:49.498514 kubelet[2796]: E0116 18:02:49.497025 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:02:52.502360 kubelet[2796]: E0116 18:02:52.502275 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:02:53.497598 kubelet[2796]: E0116 18:02:53.497540 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:02:56.502215 kubelet[2796]: E0116 
18:02:56.502127 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:02:58.499470 kubelet[2796]: E0116 18:02:58.498773 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:03:00.499134 kubelet[2796]: E0116 18:03:00.498963 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for 
\"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:03:02.504913 kubelet[2796]: E0116 18:03:02.504838 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:03:05.498118 kubelet[2796]: E0116 18:03:05.498058 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:03:07.497404 kubelet[2796]: E0116 18:03:07.497294 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:03:09.497581 kubelet[2796]: E0116 18:03:09.497517 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:03:10.500147 kubelet[2796]: E0116 18:03:10.499759 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:03:12.511556 kubelet[2796]: E0116 18:03:12.511466 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:03:15.498655 kubelet[2796]: E0116 18:03:15.497291 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:03:16.498875 kubelet[2796]: E0116 18:03:16.498277 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:03:18.496867 kubelet[2796]: E0116 18:03:18.496815 2796 pod_workers.go:1301] "Error syncing pod, 
skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:03:21.498184 containerd[1571]: time="2026-01-16T18:03:21.497134424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:03:21.843712 containerd[1571]: time="2026-01-16T18:03:21.843433391Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:21.846401 containerd[1571]: time="2026-01-16T18:03:21.846280820Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:03:21.846567 containerd[1571]: time="2026-01-16T18:03:21.846437422Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:21.846802 kubelet[2796]: E0116 18:03:21.846743 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:21.847215 kubelet[2796]: E0116 18:03:21.846903 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:21.847521 kubelet[2796]: E0116 18:03:21.847397 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:21.849738 kubelet[2796]: E0116 18:03:21.848822 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:03:22.329671 update_engine[1549]: I20260116 18:03:22.328722 1549 prefs.cc:52] certificate-report-to-send-update not present in /var/lib/update_engine/prefs Jan 16 18:03:22.329671 update_engine[1549]: I20260116 18:03:22.328802 1549 prefs.cc:52] certificate-report-to-send-download not present in 
/var/lib/update_engine/prefs Jan 16 18:03:22.329671 update_engine[1549]: I20260116 18:03:22.329201 1549 prefs.cc:52] aleph-version not present in /var/lib/update_engine/prefs Jan 16 18:03:22.333673 update_engine[1549]: I20260116 18:03:22.333599 1549 omaha_request_params.cc:62] Current group set to developer Jan 16 18:03:22.335528 update_engine[1549]: I20260116 18:03:22.335191 1549 update_attempter.cc:499] Already updated boot flags. Skipping. Jan 16 18:03:22.335812 update_engine[1549]: I20260116 18:03:22.335736 1549 update_attempter.cc:643] Scheduling an action processor start. Jan 16 18:03:22.335812 update_engine[1549]: I20260116 18:03:22.335772 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 16 18:03:22.338210 update_engine[1549]: I20260116 18:03:22.337997 1549 prefs.cc:52] previous-version not present in /var/lib/update_engine/prefs Jan 16 18:03:22.338210 update_engine[1549]: I20260116 18:03:22.338097 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 16 18:03:22.338210 update_engine[1549]: I20260116 18:03:22.338104 1549 omaha_request_action.cc:272] Request: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: Jan 16 18:03:22.338210 update_engine[1549]: I20260116 18:03:22.338111 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 18:03:22.342684 update_engine[1549]: I20260116 18:03:22.341694 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 18:03:22.342684 update_engine[1549]: I20260116 18:03:22.342578 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 18:03:22.343001 update_engine[1549]: E20260116 18:03:22.342978 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 18:03:22.343130 update_engine[1549]: I20260116 18:03:22.343112 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 1 Jan 16 18:03:22.343816 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_CHECKING_FOR_UPDATE" NewVersion=0.0.0 NewSize=0 Jan 16 18:03:24.499468 containerd[1571]: time="2026-01-16T18:03:24.499238292Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:03:24.872476 containerd[1571]: time="2026-01-16T18:03:24.872193248Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:24.875274 containerd[1571]: time="2026-01-16T18:03:24.875208358Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:03:24.875736 containerd[1571]: time="2026-01-16T18:03:24.875352799Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:24.875817 kubelet[2796]: E0116 18:03:24.875768 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:03:24.876187 kubelet[2796]: E0116 18:03:24.875824 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:03:24.876371 kubelet[2796]: E0116 18:03:24.876307 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6a4e115491b41f3bacfeb616e3df811,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:24.877985 containerd[1571]: 
time="2026-01-16T18:03:24.877899464Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:03:25.250740 containerd[1571]: time="2026-01-16T18:03:25.250052316Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:25.251887 containerd[1571]: time="2026-01-16T18:03:25.251725892Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:03:25.251887 containerd[1571]: time="2026-01-16T18:03:25.251837413Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:25.253694 kubelet[2796]: E0116 18:03:25.252737 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:03:25.253694 kubelet[2796]: E0116 18:03:25.252803 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:03:25.253694 kubelet[2796]: E0116 18:03:25.253028 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 18:03:25.253978 containerd[1571]: time="2026-01-16T18:03:25.253388989Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:03:25.594852 containerd[1571]: time="2026-01-16T18:03:25.594779131Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:25.596556 containerd[1571]: time="2026-01-16T18:03:25.596446267Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:03:25.596556 containerd[1571]: time="2026-01-16T18:03:25.596499708Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:25.597272 kubelet[2796]: E0116 18:03:25.596740 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:03:25.597272 kubelet[2796]: E0116 18:03:25.596794 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:03:25.597272 kubelet[2796]: E0116 18:03:25.597028 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:25.598649 containerd[1571]: time="2026-01-16T18:03:25.597866401Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:03:25.598762 kubelet[2796]: E0116 18:03:25.598467 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:03:25.947715 containerd[1571]: time="2026-01-16T18:03:25.947451584Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:25.950651 containerd[1571]: time="2026-01-16T18:03:25.949146800Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:03:25.950651 containerd[1571]: time="2026-01-16T18:03:25.949208761Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:25.951027 kubelet[2796]: E0116 18:03:25.950967 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack 
image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:03:25.951456 kubelet[2796]: E0116 18:03:25.951131 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:03:25.951944 kubelet[2796]: E0116 18:03:25.951868 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/terminatio
n-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:25.953152 kubelet[2796]: E0116 18:03:25.953098 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:03:28.499833 containerd[1571]: time="2026-01-16T18:03:28.499210320Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:03:28.831650 containerd[1571]: time="2026-01-16T18:03:28.831578432Z" level=info msg="fetch failed after status: 404 Not Found" 
host=ghcr.io Jan 16 18:03:28.833992 containerd[1571]: time="2026-01-16T18:03:28.833618211Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:03:28.834380 containerd[1571]: time="2026-01-16T18:03:28.833642212Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:28.834517 kubelet[2796]: E0116 18:03:28.834357 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:28.834517 kubelet[2796]: E0116 18:03:28.834422 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:03:28.835589 kubelet[2796]: E0116 18:03:28.835331 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw62r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31): ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:28.836468 containerd[1571]: time="2026-01-16T18:03:28.835247227Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:03:28.837487 kubelet[2796]: E0116 18:03:28.837418 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:03:29.177715 containerd[1571]: time="2026-01-16T18:03:29.177525903Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:29.179858 containerd[1571]: time="2026-01-16T18:03:29.179760085Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:03:29.180010 containerd[1571]: time="2026-01-16T18:03:29.179788285Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:29.180575 kubelet[2796]: E0116 18:03:29.180378 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:03:29.180575 kubelet[2796]: E0116 18:03:29.180523 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:03:29.181228 kubelet[2796]: E0116 18:03:29.181075 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fk2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:29.182446 kubelet[2796]: E0116 18:03:29.182407 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:03:32.253508 update_engine[1549]: I20260116 18:03:32.253428 1549 libcurl_http_fetcher.cc:47] 
Starting/Resuming transfer Jan 16 18:03:32.253896 update_engine[1549]: I20260116 18:03:32.253536 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 18:03:32.253958 update_engine[1549]: I20260116 18:03:32.253927 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 18:03:32.254552 update_engine[1549]: E20260116 18:03:32.254508 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 18:03:32.254663 update_engine[1549]: I20260116 18:03:32.254599 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 2 Jan 16 18:03:32.501226 containerd[1571]: time="2026-01-16T18:03:32.500866848Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:03:32.848151 containerd[1571]: time="2026-01-16T18:03:32.847946983Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:03:32.851388 containerd[1571]: time="2026-01-16T18:03:32.851211294Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:03:32.851388 containerd[1571]: time="2026-01-16T18:03:32.851237734Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:03:32.851610 kubelet[2796]: E0116 18:03:32.851513 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:03:32.851610 kubelet[2796]: E0116 18:03:32.851575 2796 kuberuntime_image.go:55] "Failed to pull 
image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:03:32.852129 kubelet[2796]: E0116 18:03:32.851764 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:03:32.853323 kubelet[2796]: E0116 18:03:32.853247 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:03:33.497459 kubelet[2796]: 
E0116 18:03:33.497407 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:03:36.501326 kubelet[2796]: E0116 18:03:36.501205 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:03:36.846000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-167.235.246.183:22-68.220.241.50:53250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:36.847396 systemd[1]: Started sshd@7-167.235.246.183:22-68.220.241.50:53250.service - OpenSSH per-connection server daemon (68.220.241.50:53250). 
Jan 16 18:03:36.853673 kernel: audit: type=1130 audit(1768586616.846:740): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-167.235.246.183:22-68.220.241.50:53250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:37.431000 audit[4902]: USER_ACCT pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.432743 sshd[4902]: Accepted publickey for core from 68.220.241.50 port 53250 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:37.435775 kernel: audit: type=1101 audit(1768586617.431:741): pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.437000 audit[4902]: CRED_ACQ pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.441142 sshd-session[4902]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:37.444358 kernel: audit: type=1103 audit(1768586617.437:742): pid=4902 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.444457 kernel: audit: type=1006 audit(1768586617.439:743): pid=4902 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 
tty=(none) old-ses=4294967295 ses=9 res=1 Jan 16 18:03:37.439000 audit[4902]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff2eab80 a2=3 a3=0 items=0 ppid=1 pid=4902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.447490 kernel: audit: type=1300 audit(1768586617.439:743): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffff2eab80 a2=3 a3=0 items=0 ppid=1 pid=4902 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=9 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:37.439000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:37.449776 kernel: audit: type=1327 audit(1768586617.439:743): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:37.452887 systemd-logind[1548]: New session 9 of user core. Jan 16 18:03:37.462267 systemd[1]: Started session-9.scope - Session 9 of User core. 
Jan 16 18:03:37.467000 audit[4902]: USER_START pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.472000 audit[4906]: CRED_ACQ pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.475243 kernel: audit: type=1105 audit(1768586617.467:744): pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.475442 kernel: audit: type=1103 audit(1768586617.472:745): pid=4906 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.871690 sshd[4906]: Connection closed by 68.220.241.50 port 53250 Jan 16 18:03:37.872139 sshd-session[4902]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:37.874000 audit[4902]: USER_END pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.877000 audit[4902]: CRED_DISP pid=4902 uid=0 auid=500 ses=9 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.883134 kernel: audit: type=1106 audit(1768586617.874:746): pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.883237 kernel: audit: type=1104 audit(1768586617.877:747): pid=4902 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:37.884072 systemd[1]: sshd@7-167.235.246.183:22-68.220.241.50:53250.service: Deactivated successfully. Jan 16 18:03:37.886000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-167.235.246.183:22-68.220.241.50:53250 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:37.889432 systemd[1]: session-9.scope: Deactivated successfully. Jan 16 18:03:37.894230 systemd-logind[1548]: Session 9 logged out. Waiting for processes to exit. Jan 16 18:03:37.898054 systemd-logind[1548]: Removed session 9. 
Jan 16 18:03:40.502155 kubelet[2796]: E0116 18:03:40.502086 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:03:42.248499 update_engine[1549]: I20260116 18:03:42.247680 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 18:03:42.248499 update_engine[1549]: I20260116 18:03:42.247776 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 18:03:42.248499 update_engine[1549]: I20260116 18:03:42.248139 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 18:03:42.249155 update_engine[1549]: E20260116 18:03:42.249125 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 18:03:42.249341 update_engine[1549]: I20260116 18:03:42.249262 1549 libcurl_http_fetcher.cc:283] No HTTP response, retry 3 Jan 16 18:03:42.501768 kubelet[2796]: E0116 18:03:42.501594 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:03:42.985101 systemd[1]: Started sshd@8-167.235.246.183:22-68.220.241.50:53612.service - OpenSSH per-connection server daemon (68.220.241.50:53612). Jan 16 18:03:42.990695 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:03:42.990847 kernel: audit: type=1130 audit(1768586622.985:749): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-167.235.246.183:22-68.220.241.50:53612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:42.985000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-167.235.246.183:22-68.220.241.50:53612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:03:43.498546 kubelet[2796]: E0116 18:03:43.498482 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:03:43.544445 sshd[4921]: Accepted publickey for core from 68.220.241.50 port 53612 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:43.543000 audit[4921]: USER_ACCT pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.548964 kernel: audit: type=1101 audit(1768586623.543:750): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.549137 kernel: audit: type=1103 audit(1768586623.548:751): pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.548000 audit[4921]: CRED_ACQ pid=4921 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 
terminal=ssh res=success' Jan 16 18:03:43.551334 sshd-session[4921]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:43.553664 kernel: audit: type=1006 audit(1768586623.548:752): pid=4921 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=10 res=1 Jan 16 18:03:43.553792 kernel: audit: type=1300 audit(1768586623.548:752): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee39cd00 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:43.548000 audit[4921]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee39cd00 a2=3 a3=0 items=0 ppid=1 pid=4921 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:43.548000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:43.556854 kernel: audit: type=1327 audit(1768586623.548:752): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:43.563778 systemd-logind[1548]: New session 10 of user core. Jan 16 18:03:43.574853 systemd[1]: Started session-10.scope - Session 10 of User core. 
Jan 16 18:03:43.579000 audit[4921]: USER_START pid=4921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.583000 audit[4925]: CRED_ACQ pid=4925 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.584681 kernel: audit: type=1105 audit(1768586623.579:753): pid=4921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.589653 kernel: audit: type=1103 audit(1768586623.583:754): pid=4925 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.976572 sshd[4925]: Connection closed by 68.220.241.50 port 53612 Jan 16 18:03:43.979709 sshd-session[4921]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:43.982000 audit[4921]: USER_END pid=4921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.983000 audit[4921]: CRED_DISP pid=4921 uid=0 auid=500 
ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.988125 kernel: audit: type=1106 audit(1768586623.982:755): pid=4921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.988217 kernel: audit: type=1104 audit(1768586623.983:756): pid=4921 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:43.988564 systemd[1]: sshd@8-167.235.246.183:22-68.220.241.50:53612.service: Deactivated successfully. Jan 16 18:03:43.989000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-167.235.246.183:22-68.220.241.50:53612 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:43.994486 systemd[1]: session-10.scope: Deactivated successfully. Jan 16 18:03:43.999511 systemd-logind[1548]: Session 10 logged out. Waiting for processes to exit. Jan 16 18:03:44.000862 systemd-logind[1548]: Removed session 10. 
Jan 16 18:03:45.498690 kubelet[2796]: E0116 18:03:45.498059 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:03:48.499290 kubelet[2796]: E0116 18:03:48.498972 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:03:48.500930 kubelet[2796]: E0116 18:03:48.500335 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:03:49.098991 systemd[1]: Started sshd@9-167.235.246.183:22-68.220.241.50:53616.service - OpenSSH per-connection server daemon (68.220.241.50:53616). Jan 16 18:03:49.098000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-167.235.246.183:22-68.220.241.50:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:49.100791 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:03:49.100898 kernel: audit: type=1130 audit(1768586629.098:758): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-167.235.246.183:22-68.220.241.50:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:49.663992 sshd[4938]: Accepted publickey for core from 68.220.241.50 port 53616 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:49.662000 audit[4938]: USER_ACCT pid=4938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.667000 audit[4938]: CRED_ACQ pid=4938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.669262 sshd-session[4938]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:49.670882 kernel: audit: type=1101 audit(1768586629.662:759): pid=4938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.670971 kernel: audit: type=1103 audit(1768586629.667:760): pid=4938 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.673020 kernel: audit: type=1006 audit(1768586629.667:761): pid=4938 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 16 18:03:49.673091 kernel: audit: type=1300 audit(1768586629.667:761): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8ace9f0 a2=3 a3=0 items=0 ppid=1 pid=4938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:49.667000 audit[4938]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd8ace9f0 a2=3 a3=0 items=0 ppid=1 pid=4938 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:49.667000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:49.676314 kernel: audit: type=1327 audit(1768586629.667:761): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:49.682589 systemd-logind[1548]: New session 11 of user core. Jan 16 18:03:49.688348 systemd[1]: Started session-11.scope - Session 11 of User core. 
Jan 16 18:03:49.693000 audit[4938]: USER_START pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.697000 audit[4942]: CRED_ACQ pid=4942 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.700206 kernel: audit: type=1105 audit(1768586629.693:762): pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:49.700436 kernel: audit: type=1103 audit(1768586629.697:763): pid=4942 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.059004 sshd[4942]: Connection closed by 68.220.241.50 port 53616 Jan 16 18:03:50.061147 sshd-session[4938]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:50.062000 audit[4938]: USER_END pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.067265 systemd[1]: 
sshd@9-167.235.246.183:22-68.220.241.50:53616.service: Deactivated successfully. Jan 16 18:03:50.062000 audit[4938]: CRED_DISP pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.071833 kernel: audit: type=1106 audit(1768586630.062:764): pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.071937 kernel: audit: type=1104 audit(1768586630.062:765): pid=4938 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.068000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-167.235.246.183:22-68.220.241.50:53616 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:50.074167 systemd[1]: session-11.scope: Deactivated successfully. Jan 16 18:03:50.079144 systemd-logind[1548]: Session 11 logged out. Waiting for processes to exit. Jan 16 18:03:50.080678 systemd-logind[1548]: Removed session 11. Jan 16 18:03:50.169136 systemd[1]: Started sshd@10-167.235.246.183:22-68.220.241.50:53618.service - OpenSSH per-connection server daemon (68.220.241.50:53618). Jan 16 18:03:50.168000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-167.235.246.183:22-68.220.241.50:53618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? 
terminal=? res=success' Jan 16 18:03:50.722651 sshd[4955]: Accepted publickey for core from 68.220.241.50 port 53618 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:50.721000 audit[4955]: USER_ACCT pid=4955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.724000 audit[4955]: CRED_ACQ pid=4955 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.725000 audit[4955]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe7bfc4e0 a2=3 a3=0 items=0 ppid=1 pid=4955 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:50.725000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:50.727958 sshd-session[4955]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:50.737028 systemd-logind[1548]: New session 12 of user core. Jan 16 18:03:50.744883 systemd[1]: Started session-12.scope - Session 12 of User core. 
Jan 16 18:03:50.749000 audit[4955]: USER_START pid=4955 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:50.752000 audit[4959]: CRED_ACQ pid=4959 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.170478 sshd[4959]: Connection closed by 68.220.241.50 port 53618 Jan 16 18:03:51.171113 sshd-session[4955]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:51.174000 audit[4955]: USER_END pid=4955 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.174000 audit[4955]: CRED_DISP pid=4955 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.182501 systemd[1]: sshd@10-167.235.246.183:22-68.220.241.50:53618.service: Deactivated successfully. Jan 16 18:03:51.182000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-167.235.246.183:22-68.220.241.50:53618 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:51.186096 systemd[1]: session-12.scope: Deactivated successfully. 
Jan 16 18:03:51.189395 systemd-logind[1548]: Session 12 logged out. Waiting for processes to exit. Jan 16 18:03:51.193839 systemd-logind[1548]: Removed session 12. Jan 16 18:03:51.281000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-167.235.246.183:22-68.220.241.50:53622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:51.282001 systemd[1]: Started sshd@11-167.235.246.183:22-68.220.241.50:53622.service - OpenSSH per-connection server daemon (68.220.241.50:53622). Jan 16 18:03:51.846000 audit[4969]: USER_ACCT pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.848807 sshd[4969]: Accepted publickey for core from 68.220.241.50 port 53622 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:51.849000 audit[4969]: CRED_ACQ pid=4969 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.849000 audit[4969]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc7c544f0 a2=3 a3=0 items=0 ppid=1 pid=4969 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:51.849000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:51.851451 sshd-session[4969]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:51.861882 systemd-logind[1548]: New session 13 of user core. 
Jan 16 18:03:51.869028 systemd[1]: Started session-13.scope - Session 13 of User core. Jan 16 18:03:51.874000 audit[4969]: USER_START pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:51.877000 audit[4973]: CRED_ACQ pid=4973 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:52.243920 update_engine[1549]: I20260116 18:03:52.243726 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 18:03:52.243920 update_engine[1549]: I20260116 18:03:52.243834 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 18:03:52.244677 update_engine[1549]: I20260116 18:03:52.244493 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. Jan 16 18:03:52.245657 update_engine[1549]: E20260116 18:03:52.245383 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 18:03:52.245657 update_engine[1549]: I20260116 18:03:52.245486 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 16 18:03:52.245657 update_engine[1549]: I20260116 18:03:52.245496 1549 omaha_request_action.cc:617] Omaha request response: Jan 16 18:03:52.245657 update_engine[1549]: E20260116 18:03:52.245597 1549 omaha_request_action.cc:636] Omaha request network transfer failed. Jan 16 18:03:52.245657 update_engine[1549]: I20260116 18:03:52.245659 1549 action_processor.cc:68] ActionProcessor::ActionComplete: OmahaRequestAction action failed. Aborting processing. 
Jan 16 18:03:52.245657 update_engine[1549]: I20260116 18:03:52.245671 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 18:03:52.245657 update_engine[1549]: I20260116 18:03:52.245676 1549 update_attempter.cc:306] Processing Done. Jan 16 18:03:52.246172 update_engine[1549]: E20260116 18:03:52.245692 1549 update_attempter.cc:619] Update failed. Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.245698 1549 utils.cc:600] Converting error code 2000 to kActionCodeOmahaErrorInHTTPResponse Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.245705 1549 payload_state.cc:97] Updating payload state for error code: 37 (kActionCodeOmahaErrorInHTTPResponse) Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.245716 1549 payload_state.cc:103] Ignoring failures until we get a valid Omaha response. Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.245929 1549 action_processor.cc:36] ActionProcessor::StartProcessing: OmahaRequestAction Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.245966 1549 omaha_request_action.cc:271] Posting an Omaha request to disabled Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.246088 1549 omaha_request_action.cc:272] Request: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.246099 1549 libcurl_http_fetcher.cc:47] Starting/Resuming transfer Jan 16 18:03:52.246172 update_engine[1549]: I20260116 18:03:52.246127 1549 libcurl_http_fetcher.cc:151] Setting up curl options for HTTP Jan 16 18:03:52.247243 update_engine[1549]: I20260116 18:03:52.247169 1549 libcurl_http_fetcher.cc:449] Setting up timeout source: 1 seconds. 
Jan 16 18:03:52.248266 update_engine[1549]: E20260116 18:03:52.247565 1549 libcurl_http_fetcher.cc:266] Unable to get http response code: Could not resolve host: disabled (Domain name not found) Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247671 1549 libcurl_http_fetcher.cc:297] Transfer resulted in an error (0), 0 bytes downloaded Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247684 1549 omaha_request_action.cc:617] Omaha request response: Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247690 1549 action_processor.cc:65] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247695 1549 action_processor.cc:73] ActionProcessor::ActionComplete: finished last action of type OmahaRequestAction Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247699 1549 update_attempter.cc:306] Processing Done. Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247704 1549 update_attempter.cc:310] Error event sent. 
Jan 16 18:03:52.248266 update_engine[1549]: I20260116 18:03:52.247711 1549 update_check_scheduler.cc:74] Next update check in 41m45s Jan 16 18:03:52.249030 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_REPORTING_ERROR_EVENT" NewVersion=0.0.0 NewSize=0 Jan 16 18:03:52.249030 locksmithd[1599]: LastCheckedTime=0 Progress=0 CurrentOperation="UPDATE_STATUS_IDLE" NewVersion=0.0.0 NewSize=0 Jan 16 18:03:52.268792 sshd[4973]: Connection closed by 68.220.241.50 port 53622 Jan 16 18:03:52.269481 sshd-session[4969]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:52.272000 audit[4969]: USER_END pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:52.272000 audit[4969]: CRED_DISP pid=4969 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:52.276722 systemd[1]: sshd@11-167.235.246.183:22-68.220.241.50:53622.service: Deactivated successfully. Jan 16 18:03:52.276000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-167.235.246.183:22-68.220.241.50:53622 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:52.280460 systemd[1]: session-13.scope: Deactivated successfully. Jan 16 18:03:52.287058 systemd-logind[1548]: Session 13 logged out. Waiting for processes to exit. Jan 16 18:03:52.288206 systemd-logind[1548]: Removed session 13. 
Jan 16 18:03:55.497763 kubelet[2796]: E0116 18:03:55.497679 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:03:55.500739 kubelet[2796]: E0116 18:03:55.500680 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:03:57.378000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-167.235.246.183:22-68.220.241.50:47574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:57.378985 systemd[1]: Started sshd@12-167.235.246.183:22-68.220.241.50:47574.service - OpenSSH per-connection server daemon (68.220.241.50:47574). 
Jan 16 18:03:57.382562 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 16 18:03:57.382606 kernel: audit: type=1130 audit(1768586637.378:785): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-167.235.246.183:22-68.220.241.50:47574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:57.497297 kubelet[2796]: E0116 18:03:57.497244 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:03:57.499216 kubelet[2796]: E0116 18:03:57.499133 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:03:57.924000 audit[4992]: USER_ACCT pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.927998 sshd[4992]: Accepted publickey for core from 68.220.241.50 port 47574 ssh2: RSA 
SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:03:57.929744 kernel: audit: type=1101 audit(1768586637.924:786): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.929890 kernel: audit: type=1103 audit(1768586637.928:787): pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.928000 audit[4992]: CRED_ACQ pid=4992 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.931059 sshd-session[4992]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:03:57.933553 kernel: audit: type=1006 audit(1768586637.928:788): pid=4992 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=14 res=1 Jan 16 18:03:57.933675 kernel: audit: type=1300 audit(1768586637.928:788): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc815c1d0 a2=3 a3=0 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:03:57.928000 audit[4992]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc815c1d0 a2=3 a3=0 items=0 ppid=1 pid=4992 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 16 18:03:57.928000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:57.938056 kernel: audit: type=1327 audit(1768586637.928:788): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:03:57.945984 systemd-logind[1548]: New session 14 of user core. Jan 16 18:03:57.952313 systemd[1]: Started session-14.scope - Session 14 of User core. Jan 16 18:03:57.958000 audit[4992]: USER_START pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.963000 audit[4996]: CRED_ACQ pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.966429 kernel: audit: type=1105 audit(1768586637.958:789): pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:57.966521 kernel: audit: type=1103 audit(1768586637.963:790): pid=4996 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:58.376268 sshd[4996]: Connection closed by 68.220.241.50 port 47574 Jan 16 18:03:58.376670 sshd-session[4992]: pam_unix(sshd:session): session closed for user core Jan 16 18:03:58.381000 audit[4992]: 
USER_END pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:58.383000 audit[4992]: CRED_DISP pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:58.388057 kernel: audit: type=1106 audit(1768586638.381:791): pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:58.388340 kernel: audit: type=1104 audit(1768586638.383:792): pid=4992 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:03:58.390024 systemd[1]: sshd@12-167.235.246.183:22-68.220.241.50:47574.service: Deactivated successfully. Jan 16 18:03:58.389000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-167.235.246.183:22-68.220.241.50:47574 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:03:58.393388 systemd[1]: session-14.scope: Deactivated successfully. Jan 16 18:03:58.400099 systemd-logind[1548]: Session 14 logged out. Waiting for processes to exit. Jan 16 18:03:58.402441 systemd-logind[1548]: Removed session 14. 
Jan 16 18:03:59.498673 kubelet[2796]: E0116 18:03:59.498320 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:04:02.502255 kubelet[2796]: E0116 18:04:02.501948 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:04:03.498731 systemd[1]: Started sshd@13-167.235.246.183:22-68.220.241.50:60006.service - OpenSSH per-connection server daemon (68.220.241.50:60006). Jan 16 18:04:03.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-167.235.246.183:22-68.220.241.50:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:03.502696 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:04:03.502839 kernel: audit: type=1130 audit(1768586643.498:794): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-167.235.246.183:22-68.220.241.50:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:04.066000 audit[5033]: USER_ACCT pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.070877 sshd[5033]: Accepted publickey for core from 68.220.241.50 port 60006 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:04.070000 audit[5033]: CRED_ACQ pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.074486 kernel: audit: type=1101 audit(1768586644.066:795): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.074588 kernel: audit: type=1103 audit(1768586644.070:796): pid=5033 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.074608 kernel: audit: type=1006 audit(1768586644.070:797): pid=5033 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 
ses=15 res=1 Jan 16 18:04:04.074394 sshd-session[5033]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:04.070000 audit[5033]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2dec5c0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.078119 kernel: audit: type=1300 audit(1768586644.070:797): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe2dec5c0 a2=3 a3=0 items=0 ppid=1 pid=5033 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:04.078281 kernel: audit: type=1327 audit(1768586644.070:797): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:04.070000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:04.083736 systemd-logind[1548]: New session 15 of user core. Jan 16 18:04:04.089902 systemd[1]: Started session-15.scope - Session 15 of User core. 
Jan 16 18:04:04.094000 audit[5033]: USER_START pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.097000 audit[5037]: CRED_ACQ pid=5037 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.100743 kernel: audit: type=1105 audit(1768586644.094:798): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.100875 kernel: audit: type=1103 audit(1768586644.097:799): pid=5037 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.518761 sshd[5037]: Connection closed by 68.220.241.50 port 60006 Jan 16 18:04:04.516431 sshd-session[5033]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:04.519000 audit[5033]: USER_END pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.519000 audit[5033]: CRED_DISP pid=5033 uid=0 auid=500 
ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.529607 kernel: audit: type=1106 audit(1768586644.519:800): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.529754 kernel: audit: type=1104 audit(1768586644.519:801): pid=5033 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:04.533358 systemd-logind[1548]: Session 15 logged out. Waiting for processes to exit. Jan 16 18:04:04.534115 systemd[1]: sshd@13-167.235.246.183:22-68.220.241.50:60006.service: Deactivated successfully. Jan 16 18:04:04.535000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-167.235.246.183:22-68.220.241.50:60006 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:04.541116 systemd[1]: session-15.scope: Deactivated successfully. Jan 16 18:04:04.548902 systemd-logind[1548]: Removed session 15. 
Jan 16 18:04:07.499388 kubelet[2796]: E0116 18:04:07.499270 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:04:08.499296 kubelet[2796]: E0116 18:04:08.498794 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:04:09.499572 kubelet[2796]: E0116 18:04:09.498408 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:04:09.622051 systemd[1]: Started sshd@14-167.235.246.183:22-68.220.241.50:60012.service - OpenSSH per-connection server daemon (68.220.241.50:60012). Jan 16 18:04:09.624600 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:04:09.624763 kernel: audit: type=1130 audit(1768586649.621:803): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-167.235.246.183:22-68.220.241.50:60012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:09.621000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-167.235.246.183:22-68.220.241.50:60012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:10.191000 audit[5051]: USER_ACCT pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.195800 sshd[5051]: Accepted publickey for core from 68.220.241.50 port 60012 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:10.197259 sshd-session[5051]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:10.195000 audit[5051]: CRED_ACQ pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.199651 kernel: audit: type=1101 audit(1768586650.191:804): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 
msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.199774 kernel: audit: type=1103 audit(1768586650.195:805): pid=5051 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.201703 kernel: audit: type=1006 audit(1768586650.195:806): pid=5051 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 16 18:04:10.195000 audit[5051]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda44b1b0 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:10.205220 kernel: audit: type=1300 audit(1768586650.195:806): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffda44b1b0 a2=3 a3=0 items=0 ppid=1 pid=5051 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:10.195000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:10.206364 kernel: audit: type=1327 audit(1768586650.195:806): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:10.212045 systemd-logind[1548]: New session 16 of user core. Jan 16 18:04:10.224015 systemd[1]: Started session-16.scope - Session 16 of User core. 
Jan 16 18:04:10.231000 audit[5051]: USER_START pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.235684 kernel: audit: type=1105 audit(1768586650.231:807): pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.237000 audit[5055]: CRED_ACQ pid=5055 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.241663 kernel: audit: type=1103 audit(1768586650.237:808): pid=5055 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.499122 kubelet[2796]: E0116 18:04:10.498984 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 
18:04:10.604280 sshd[5055]: Connection closed by 68.220.241.50 port 60012 Jan 16 18:04:10.605414 sshd-session[5051]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:10.607000 audit[5051]: USER_END pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.607000 audit[5051]: CRED_DISP pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.615342 kernel: audit: type=1106 audit(1768586650.607:809): pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.615544 kernel: audit: type=1104 audit(1768586650.607:810): pid=5051 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:10.616262 systemd[1]: sshd@14-167.235.246.183:22-68.220.241.50:60012.service: Deactivated successfully. Jan 16 18:04:10.616000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-167.235.246.183:22-68.220.241.50:60012 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:10.620003 systemd[1]: session-16.scope: Deactivated successfully. Jan 16 18:04:10.623116 systemd-logind[1548]: Session 16 logged out. Waiting for processes to exit. Jan 16 18:04:10.630422 systemd-logind[1548]: Removed session 16. Jan 16 18:04:10.722000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-167.235.246.183:22-68.220.241.50:60014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:10.722005 systemd[1]: Started sshd@15-167.235.246.183:22-68.220.241.50:60014.service - OpenSSH per-connection server daemon (68.220.241.50:60014). Jan 16 18:04:11.256000 audit[5066]: USER_ACCT pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.258549 sshd[5066]: Accepted publickey for core from 68.220.241.50 port 60014 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:11.259000 audit[5066]: CRED_ACQ pid=5066 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.259000 audit[5066]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffccecb600 a2=3 a3=0 items=0 ppid=1 pid=5066 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:11.259000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:11.262217 sshd-session[5066]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 
16 18:04:11.270735 systemd-logind[1548]: New session 17 of user core. Jan 16 18:04:11.274923 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 16 18:04:11.281000 audit[5066]: USER_START pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.284000 audit[5070]: CRED_ACQ pid=5070 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.866667 sshd[5070]: Connection closed by 68.220.241.50 port 60014 Jan 16 18:04:11.867737 sshd-session[5066]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:11.870000 audit[5066]: USER_END pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.870000 audit[5066]: CRED_DISP pid=5066 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:11.876219 systemd[1]: sshd@15-167.235.246.183:22-68.220.241.50:60014.service: Deactivated successfully. 
Jan 16 18:04:11.876000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-167.235.246.183:22-68.220.241.50:60014 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:11.880442 systemd[1]: session-17.scope: Deactivated successfully. Jan 16 18:04:11.883173 systemd-logind[1548]: Session 17 logged out. Waiting for processes to exit. Jan 16 18:04:11.886041 systemd-logind[1548]: Removed session 17. Jan 16 18:04:11.983444 systemd[1]: Started sshd@16-167.235.246.183:22-68.220.241.50:60026.service - OpenSSH per-connection server daemon (68.220.241.50:60026). Jan 16 18:04:11.983000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-167.235.246.183:22-68.220.241.50:60026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:12.561562 sshd[5081]: Accepted publickey for core from 68.220.241.50 port 60026 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:12.560000 audit[5081]: USER_ACCT pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:12.562000 audit[5081]: CRED_ACQ pid=5081 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:12.562000 audit[5081]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffec971a90 a2=3 a3=0 items=0 ppid=1 pid=5081 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:12.562000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:12.563827 sshd-session[5081]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:12.571105 systemd-logind[1548]: New session 18 of user core. Jan 16 18:04:12.576994 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 16 18:04:12.581000 audit[5081]: USER_START pid=5081 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:12.585000 audit[5086]: CRED_ACQ pid=5086 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:13.809000 audit[5096]: NETFILTER_CFG table=filter:146 family=2 entries=26 op=nft_register_rule pid=5096 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:13.809000 audit[5096]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffd5dc28b0 a2=0 a3=1 items=0 ppid=2936 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:13.809000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:13.816000 audit[5096]: NETFILTER_CFG table=nat:147 family=2 entries=20 op=nft_register_rule pid=5096 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:13.816000 audit[5096]: SYSCALL arch=c00000b7 
syscall=211 success=yes exit=5772 a0=3 a1=ffffd5dc28b0 a2=0 a3=1 items=0 ppid=2936 pid=5096 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:13.816000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:13.842000 audit[5098]: NETFILTER_CFG table=filter:148 family=2 entries=38 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:13.842000 audit[5098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=fffff9ec2d70 a2=0 a3=1 items=0 ppid=2936 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:13.842000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:13.847000 audit[5098]: NETFILTER_CFG table=nat:149 family=2 entries=20 op=nft_register_rule pid=5098 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:13.847000 audit[5098]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=fffff9ec2d70 a2=0 a3=1 items=0 ppid=2936 pid=5098 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:13.847000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:13.892774 sshd[5086]: Connection closed by 68.220.241.50 port 60026 Jan 16 18:04:13.893759 sshd-session[5081]: pam_unix(sshd:session): session closed for user core Jan 16 
18:04:13.895000 audit[5081]: USER_END pid=5081 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:13.895000 audit[5081]: CRED_DISP pid=5081 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:13.904000 systemd[1]: sshd@16-167.235.246.183:22-68.220.241.50:60026.service: Deactivated successfully. Jan 16 18:04:13.906000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-167.235.246.183:22-68.220.241.50:60026 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:13.909440 systemd[1]: session-18.scope: Deactivated successfully. Jan 16 18:04:13.913365 systemd-logind[1548]: Session 18 logged out. Waiting for processes to exit. Jan 16 18:04:13.915815 systemd-logind[1548]: Removed session 18. Jan 16 18:04:14.003138 systemd[1]: Started sshd@17-167.235.246.183:22-68.220.241.50:33332.service - OpenSSH per-connection server daemon (68.220.241.50:33332). Jan 16 18:04:14.003000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-167.235.246.183:22-68.220.241.50:33332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 16 18:04:14.501014 kubelet[2796]: E0116 18:04:14.499266 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:04:14.558000 audit[5103]: USER_ACCT pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:14.561708 sshd[5103]: Accepted publickey for core from 68.220.241.50 port 33332 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:14.563000 audit[5103]: CRED_ACQ pid=5103 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:14.564000 audit[5103]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff0f3a870 a2=3 a3=0 items=0 ppid=1 pid=5103 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:14.564000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:14.567806 sshd-session[5103]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:14.578182 systemd-logind[1548]: New session 19 of user core. 
Jan 16 18:04:14.582932 systemd[1]: Started session-19.scope - Session 19 of User core. Jan 16 18:04:14.587000 audit[5103]: USER_START pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:14.590000 audit[5107]: CRED_ACQ pid=5107 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.164294 sshd[5107]: Connection closed by 68.220.241.50 port 33332 Jan 16 18:04:15.165129 sshd-session[5103]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:15.169000 audit[5103]: USER_END pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.172974 kernel: kauditd_printk_skb: 43 callbacks suppressed Jan 16 18:04:15.173095 kernel: audit: type=1106 audit(1768586655.169:840): pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.169000 audit[5103]: CRED_DISP pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.177677 kernel: audit: type=1104 audit(1768586655.169:841): pid=5103 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.183688 systemd-logind[1548]: Session 19 logged out. Waiting for processes to exit. Jan 16 18:04:15.184588 systemd[1]: sshd@17-167.235.246.183:22-68.220.241.50:33332.service: Deactivated successfully. Jan 16 18:04:15.185000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-167.235.246.183:22-68.220.241.50:33332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:15.189566 systemd[1]: session-19.scope: Deactivated successfully. Jan 16 18:04:15.189948 kernel: audit: type=1131 audit(1768586655.185:842): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-167.235.246.183:22-68.220.241.50:33332 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:15.194251 systemd-logind[1548]: Removed session 19. Jan 16 18:04:15.288000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-167.235.246.183:22-68.220.241.50:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:15.288542 systemd[1]: Started sshd@18-167.235.246.183:22-68.220.241.50:33340.service - OpenSSH per-connection server daemon (68.220.241.50:33340). Jan 16 18:04:15.292698 kernel: audit: type=1130 audit(1768586655.288:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-167.235.246.183:22-68.220.241.50:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 16 18:04:15.497609 kubelet[2796]: E0116 18:04:15.497439 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:04:15.886000 audit[5117]: USER_ACCT pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.891056 sshd[5117]: Accepted publickey for core from 68.220.241.50 port 33340 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:15.892389 sshd-session[5117]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:15.890000 audit[5117]: CRED_ACQ pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.894446 kernel: audit: type=1101 audit(1768586655.886:844): pid=5117 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.894549 kernel: audit: type=1103 audit(1768586655.890:845): pid=5117 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.897271 kernel: audit: type=1006 audit(1768586655.890:846): pid=5117 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 16 18:04:15.890000 audit[5117]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef32ae30 a2=3 a3=0 items=0 ppid=1 pid=5117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:15.900169 kernel: audit: type=1300 audit(1768586655.890:846): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffef32ae30 a2=3 a3=0 items=0 ppid=1 pid=5117 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:15.890000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:15.902697 kernel: audit: type=1327 audit(1768586655.890:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:15.908154 systemd-logind[1548]: New session 20 of user core. Jan 16 18:04:15.914002 systemd[1]: Started session-20.scope - Session 20 of User core. 
Jan 16 18:04:15.918000 audit[5117]: USER_START pid=5117 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.919000 audit[5121]: CRED_ACQ pid=5121 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:15.923888 kernel: audit: type=1105 audit(1768586655.918:847): pid=5117 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:16.342696 sshd[5121]: Connection closed by 68.220.241.50 port 33340 Jan 16 18:04:16.342353 sshd-session[5117]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:16.347000 audit[5117]: USER_END pid=5117 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:16.347000 audit[5117]: CRED_DISP pid=5117 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:16.354000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-167.235.246.183:22-68.220.241.50:33340 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:16.354940 systemd[1]: sshd@18-167.235.246.183:22-68.220.241.50:33340.service: Deactivated successfully. Jan 16 18:04:16.361553 systemd[1]: session-20.scope: Deactivated successfully. Jan 16 18:04:16.366007 systemd-logind[1548]: Session 20 logged out. Waiting for processes to exit. Jan 16 18:04:16.373306 systemd-logind[1548]: Removed session 20. Jan 16 18:04:18.506000 audit[5133]: NETFILTER_CFG table=filter:150 family=2 entries=26 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:18.506000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffcbb595a0 a2=0 a3=1 items=0 ppid=2936 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:18.506000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:18.515000 audit[5133]: NETFILTER_CFG table=nat:151 family=2 entries=104 op=nft_register_chain pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 16 18:04:18.515000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffcbb595a0 a2=0 a3=1 items=0 ppid=2936 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:18.515000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 16 18:04:19.496266 kubelet[2796]: E0116 18:04:19.496161 2796 pod_workers.go:1301] "Error 
syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:04:19.496266 kubelet[2796]: E0116 18:04:19.496225 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:04:20.498318 kubelet[2796]: E0116 18:04:20.498181 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" 
podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:04:21.460595 kernel: kauditd_printk_skb: 10 callbacks suppressed Jan 16 18:04:21.460755 kernel: audit: type=1130 audit(1768586661.456:854): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-167.235.246.183:22-68.220.241.50:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:21.456000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-167.235.246.183:22-68.220.241.50:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:21.456956 systemd[1]: Started sshd@19-167.235.246.183:22-68.220.241.50:33346.service - OpenSSH per-connection server daemon (68.220.241.50:33346). Jan 16 18:04:22.018000 audit[5135]: USER_ACCT pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.022161 sshd[5135]: Accepted publickey for core from 68.220.241.50 port 33346 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:22.023000 audit[5135]: CRED_ACQ pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.026642 kernel: audit: type=1101 audit(1768586662.018:855): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.026727 kernel: audit: 
type=1103 audit(1768586662.023:856): pid=5135 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.024705 sshd-session[5135]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:22.030013 kernel: audit: type=1006 audit(1768586662.023:857): pid=5135 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 16 18:04:22.032563 kernel: audit: type=1300 audit(1768586662.023:857): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5d6f340 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:22.023000 audit[5135]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc5d6f340 a2=3 a3=0 items=0 ppid=1 pid=5135 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:22.023000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:22.033941 kernel: audit: type=1327 audit(1768586662.023:857): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:22.040589 systemd-logind[1548]: New session 21 of user core. Jan 16 18:04:22.044945 systemd[1]: Started session-21.scope - Session 21 of User core. 
Jan 16 18:04:22.049000 audit[5135]: USER_START pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.053000 audit[5139]: CRED_ACQ pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.057246 kernel: audit: type=1105 audit(1768586662.049:858): pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.057369 kernel: audit: type=1103 audit(1768586662.053:859): pid=5139 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.431151 sshd[5139]: Connection closed by 68.220.241.50 port 33346 Jan 16 18:04:22.433962 sshd-session[5135]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:22.436000 audit[5135]: USER_END pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.441639 kernel: audit: type=1106 
audit(1768586662.436:860): pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.436000 audit[5135]: CRED_DISP pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.442940 systemd[1]: sshd@19-167.235.246.183:22-68.220.241.50:33346.service: Deactivated successfully. Jan 16 18:04:22.444000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-167.235.246.183:22-68.220.241.50:33346 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:22.445653 kernel: audit: type=1104 audit(1768586662.436:861): pid=5135 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:22.449969 systemd[1]: session-21.scope: Deactivated successfully. Jan 16 18:04:22.454395 systemd-logind[1548]: Session 21 logged out. Waiting for processes to exit. Jan 16 18:04:22.459189 systemd-logind[1548]: Removed session 21. 
Jan 16 18:04:24.497959 kubelet[2796]: E0116 18:04:24.497901 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:04:25.497102 kubelet[2796]: E0116 18:04:25.496870 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:04:27.498526 kubelet[2796]: E0116 18:04:27.498298 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:04:27.548643 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:04:27.548768 kernel: audit: type=1130 audit(1768586667.545:863): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-167.235.246.183:22-68.220.241.50:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.545000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-167.235.246.183:22-68.220.241.50:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:27.545985 systemd[1]: Started sshd@20-167.235.246.183:22-68.220.241.50:33304.service - OpenSSH per-connection server daemon (68.220.241.50:33304). Jan 16 18:04:28.131000 audit[5150]: USER_ACCT pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.132382 sshd[5150]: Accepted publickey for core from 68.220.241.50 port 33304 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:28.134000 audit[5150]: CRED_ACQ pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.137653 kernel: audit: type=1101 audit(1768586668.131:864): pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.137732 kernel: audit: type=1103 audit(1768586668.134:865): pid=5150 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.137089 sshd-session[5150]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:28.140314 kernel: audit: type=1006 audit(1768586668.134:866): pid=5150 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 16 18:04:28.134000 audit[5150]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2ed5960 a2=3 a3=0 items=0 ppid=1 pid=5150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:28.143527 kernel: audit: type=1300 audit(1768586668.134:866): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc2ed5960 a2=3 a3=0 items=0 ppid=1 pid=5150 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:28.134000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:28.146647 kernel: audit: type=1327 audit(1768586668.134:866): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:28.150155 systemd-logind[1548]: New session 22 of user core. Jan 16 18:04:28.157388 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 16 18:04:28.161000 audit[5150]: USER_START pid=5150 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.167676 kernel: audit: type=1105 audit(1768586668.161:867): pid=5150 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.167827 kernel: audit: type=1103 audit(1768586668.166:868): pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.166000 audit[5154]: CRED_ACQ pid=5154 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.559965 sshd[5154]: Connection closed by 68.220.241.50 port 33304 Jan 16 18:04:28.560330 sshd-session[5150]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:28.563000 audit[5150]: USER_END pid=5150 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.563000 audit[5150]: CRED_DISP pid=5150 uid=0 auid=500 
ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.570886 kernel: audit: type=1106 audit(1768586668.563:869): pid=5150 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.572671 kernel: audit: type=1104 audit(1768586668.563:870): pid=5150 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:28.573489 systemd[1]: sshd@20-167.235.246.183:22-68.220.241.50:33304.service: Deactivated successfully. Jan 16 18:04:28.574000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-167.235.246.183:22-68.220.241.50:33304 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:28.582983 systemd[1]: session-22.scope: Deactivated successfully. Jan 16 18:04:28.585857 systemd-logind[1548]: Session 22 logged out. Waiting for processes to exit. Jan 16 18:04:28.590581 systemd-logind[1548]: Removed session 22. 
Jan 16 18:04:33.499220 kubelet[2796]: E0116 18:04:33.498848 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:04:33.672761 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:04:33.672878 kernel: audit: type=1130 audit(1768586673.669:872): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-167.235.246.183:22-68.220.241.50:36162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:33.669000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-167.235.246.183:22-68.220.241.50:36162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:33.670006 systemd[1]: Started sshd@21-167.235.246.183:22-68.220.241.50:36162.service - OpenSSH per-connection server daemon (68.220.241.50:36162). 
Jan 16 18:04:34.231000 audit[5195]: USER_ACCT pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.235679 kernel: audit: type=1101 audit(1768586674.231:873): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.235786 sshd[5195]: Accepted publickey for core from 68.220.241.50 port 36162 ssh2: RSA SHA256:9+z26MN7AwYt4+Fu/36id5j/He/wK/R0J/NBrgFPGqc Jan 16 18:04:34.236000 audit[5195]: CRED_ACQ pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.238765 sshd-session[5195]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 16 18:04:34.241285 kernel: audit: type=1103 audit(1768586674.236:874): pid=5195 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.241379 kernel: audit: type=1006 audit(1768586674.236:875): pid=5195 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 16 18:04:34.236000 audit[5195]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf70bb60 a2=3 a3=0 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" 
exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:34.244122 kernel: audit: type=1300 audit(1768586674.236:875): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcf70bb60 a2=3 a3=0 items=0 ppid=1 pid=5195 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:04:34.236000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:34.245846 kernel: audit: type=1327 audit(1768586674.236:875): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 16 18:04:34.249216 systemd-logind[1548]: New session 23 of user core. Jan 16 18:04:34.255117 systemd[1]: Started session-23.scope - Session 23 of User core. Jan 16 18:04:34.258000 audit[5195]: USER_START pid=5195 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.265673 kernel: audit: type=1105 audit(1768586674.258:876): pid=5195 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.265000 audit[5199]: CRED_ACQ pid=5199 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.268672 kernel: audit: type=1103 audit(1768586674.265:877): pid=5199 uid=0 auid=500 
ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.501944 kubelet[2796]: E0116 18:04:34.501782 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:04:34.502921 kubelet[2796]: E0116 18:04:34.502364 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:04:34.644449 sshd[5199]: Connection closed by 68.220.241.50 port 36162 Jan 16 18:04:34.645468 sshd-session[5195]: pam_unix(sshd:session): session closed for user core Jan 16 18:04:34.649000 audit[5195]: USER_END pid=5195 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.649000 audit[5195]: CRED_DISP pid=5195 uid=0 auid=500 ses=23 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.658676 kernel: audit: type=1106 audit(1768586674.649:878): pid=5195 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.658858 kernel: audit: type=1104 audit(1768586674.649:879): pid=5195 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=68.220.241.50 addr=68.220.241.50 terminal=ssh res=success' Jan 16 18:04:34.662007 systemd[1]: sshd@21-167.235.246.183:22-68.220.241.50:36162.service: Deactivated successfully. Jan 16 18:04:34.661000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-167.235.246.183:22-68.220.241.50:36162 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 16 18:04:34.667369 systemd[1]: session-23.scope: Deactivated successfully. Jan 16 18:04:34.671708 systemd-logind[1548]: Session 23 logged out. Waiting for processes to exit. Jan 16 18:04:34.674076 systemd-logind[1548]: Removed session 23. 
Jan 16 18:04:36.501414 kubelet[2796]: E0116 18:04:36.501367 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:04:36.503595 kubelet[2796]: E0116 18:04:36.503537 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:04:42.501264 kubelet[2796]: E0116 18:04:42.501205 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:04:46.499684 containerd[1571]: time="2026-01-16T18:04:46.498594308Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 16 18:04:46.888843 containerd[1571]: time="2026-01-16T18:04:46.888458166Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:46.892680 containerd[1571]: time="2026-01-16T18:04:46.891262938Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 16 18:04:46.892872 containerd[1571]: time="2026-01-16T18:04:46.891370818Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:46.893183 kubelet[2796]: E0116 18:04:46.893035 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:46.893183 kubelet[2796]: E0116 18:04:46.893126 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 16 18:04:46.901790 kubelet[2796]: E0116 18:04:46.901017 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) 
--loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" 
logger="UnhandledError" Jan 16 18:04:46.906968 containerd[1571]: time="2026-01-16T18:04:46.906765602Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 16 18:04:47.262789 containerd[1571]: time="2026-01-16T18:04:47.262219649Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:47.265335 containerd[1571]: time="2026-01-16T18:04:47.265216181Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 16 18:04:47.266771 containerd[1571]: time="2026-01-16T18:04:47.265344022Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:47.266925 kubelet[2796]: E0116 18:04:47.265577 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:47.266925 kubelet[2796]: E0116 18:04:47.265736 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 16 18:04:47.266925 kubelet[2796]: E0116 18:04:47.266160 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-4dchv,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-lnrx8_calico-system(30371a55-b2a2-4dfe-86cf-86b9aadb477e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:47.267578 kubelet[2796]: E0116 18:04:47.267468 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:04:47.496665 kubelet[2796]: E0116 18:04:47.496588 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:04:48.499713 kubelet[2796]: E0116 18:04:48.499110 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" 
podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:04:48.501897 kubelet[2796]: E0116 18:04:48.499618 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:04:50.498046 containerd[1571]: time="2026-01-16T18:04:50.497720123Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:04:50.858417 containerd[1571]: time="2026-01-16T18:04:50.858363560Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:50.862275 containerd[1571]: time="2026-01-16T18:04:50.860159007Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:04:50.862275 containerd[1571]: time="2026-01-16T18:04:50.860216288Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:50.862802 kubelet[2796]: E0116 18:04:50.862749 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:50.863698 kubelet[2796]: E0116 18:04:50.863240 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: 
code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:04:50.864118 kubelet[2796]: E0116 18:04:50.864045 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-t2ks5,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-pnwsh_calico-apiserver(32aa7e73-83e3-4413-b798-e52a9caaa69f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:50.865736 kubelet[2796]: E0116 18:04:50.865691 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:04:55.498468 containerd[1571]: time="2026-01-16T18:04:55.498051316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 16 18:04:55.847673 containerd[1571]: time="2026-01-16T18:04:55.847392331Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:04:55.849399 containerd[1571]: time="2026-01-16T18:04:55.849321180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 16 18:04:55.849698 containerd[1571]: time="2026-01-16T18:04:55.849462741Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:55.850218 kubelet[2796]: E0116 18:04:55.849824 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:55.850218 kubelet[2796]: E0116 18:04:55.849886 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 16 18:04:55.850218 kubelet[2796]: E0116 18:04:55.850039 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:a6a4e115491b41f3bacfeb616e3df811,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:55.852971 containerd[1571]: time="2026-01-16T18:04:55.852924316Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 16 18:04:56.193760 containerd[1571]: 
time="2026-01-16T18:04:56.193448619Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:04:56.197197 containerd[1571]: time="2026-01-16T18:04:56.197131116Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 16 18:04:56.197507 containerd[1571]: time="2026-01-16T18:04:56.197313157Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 16 18:04:56.198150 kubelet[2796]: E0116 18:04:56.197827 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:56.198150 kubelet[2796]: E0116 18:04:56.197880 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 16 18:04:56.198150 kubelet[2796]: E0116 18:04:56.198000 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ldhg8,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-76db8c587c-rvj9m_calico-system(53e714ba-c9e3-42df-ae04-537a6b215f2f): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 16 18:04:56.199505 kubelet[2796]: E0116 18:04:56.199433 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:05:00.499928 kubelet[2796]: E0116 18:05:00.499855 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-lnrx8" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" Jan 16 18:05:01.497288 containerd[1571]: time="2026-01-16T18:05:01.497246216Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 16 18:05:01.860073 containerd[1571]: time="2026-01-16T18:05:01.860007049Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:01.862458 containerd[1571]: time="2026-01-16T18:05:01.862363100Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 16 18:05:01.862826 containerd[1571]: time="2026-01-16T18:05:01.862414380Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:01.862941 kubelet[2796]: E0116 18:05:01.862808 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:01.862941 kubelet[2796]: E0116 18:05:01.862867 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 16 18:05:01.864078 kubelet[2796]: E0116 18:05:01.863125 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-dpcfg,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status 
-r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-78d49b4987-blz52_calico-system(5fef7a00-b60c-4abb-8c75-26be1bbddcd8): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:01.864394 containerd[1571]: time="2026-01-16T18:05:01.863439985Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 16 18:05:01.865122 kubelet[2796]: E0116 18:05:01.865072 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-78d49b4987-blz52" podUID="5fef7a00-b60c-4abb-8c75-26be1bbddcd8" Jan 16 18:05:02.201770 containerd[1571]: time="2026-01-16T18:05:02.201092866Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 
18:05:02.207580 containerd[1571]: time="2026-01-16T18:05:02.204646043Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 16 18:05:02.207961 containerd[1571]: time="2026-01-16T18:05:02.204706203Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:02.209356 kubelet[2796]: E0116 18:05:02.209108 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:02.209548 kubelet[2796]: E0116 18:05:02.209524 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 16 18:05:02.209797 kubelet[2796]: E0116 18:05:02.209743 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-9fk2q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-5gfbb_calico-system(91c39312-4785-4237-a846-6ef23d8c4ee9): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:02.211443 kubelet[2796]: E0116 18:05:02.211367 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-5gfbb" podUID="91c39312-4785-4237-a846-6ef23d8c4ee9" Jan 16 18:05:03.501757 containerd[1571]: time="2026-01-16T18:05:03.501047024Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 16 18:05:03.505704 kubelet[2796]: E0116 18:05:03.505306 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-pnwsh" podUID="32aa7e73-83e3-4413-b798-e52a9caaa69f" Jan 16 18:05:03.882038 containerd[1571]: time="2026-01-16T18:05:03.881754047Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 16 18:05:03.884239 containerd[1571]: time="2026-01-16T18:05:03.883987618Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 16 18:05:03.884239 containerd[1571]: time="2026-01-16T18:05:03.884058378Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 16 18:05:03.884583 kubelet[2796]: E0116 18:05:03.884547 2796 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:03.884877 kubelet[2796]: E0116 18:05:03.884689 2796 kuberuntime_image.go:55] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 16 18:05:03.884877 kubelet[2796]: E0116 18:05:03.884821 2796 kuberuntime_manager.go:1341] "Unhandled Error" err="container 
&Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tw62r,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-66b59648fd-qhvz8_calico-apiserver(9b676f4f-1816-4b2f-88ec-512b756c1b31): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 16 18:05:03.886041 kubelet[2796]: E0116 18:05:03.886008 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-66b59648fd-qhvz8" podUID="9b676f4f-1816-4b2f-88ec-512b756c1b31" Jan 16 18:05:06.505288 systemd[1]: cri-containerd-617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940.scope: Deactivated successfully. 
Jan 16 18:05:06.505968 systemd[1]: cri-containerd-617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940.scope: Consumed 5.351s CPU time, 62.1M memory peak, 3.3M read from disk. Jan 16 18:05:06.507000 audit: BPF prog-id=254 op=LOAD Jan 16 18:05:06.509504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 16 18:05:06.509598 kernel: audit: type=1334 audit(1768586706.507:881): prog-id=254 op=LOAD Jan 16 18:05:06.509645 kernel: audit: type=1334 audit(1768586706.508:882): prog-id=86 op=UNLOAD Jan 16 18:05:06.508000 audit: BPF prog-id=86 op=UNLOAD Jan 16 18:05:06.509000 audit: BPF prog-id=101 op=UNLOAD Jan 16 18:05:06.509000 audit: BPF prog-id=105 op=UNLOAD Jan 16 18:05:06.511646 kernel: audit: type=1334 audit(1768586706.509:883): prog-id=101 op=UNLOAD Jan 16 18:05:06.511793 containerd[1571]: time="2026-01-16T18:05:06.511758266Z" level=info msg="received container exit event container_id:\"617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940\" id:\"617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940\" pid:2663 exit_status:1 exited_at:{seconds:1768586706 nanos:511128263}" Jan 16 18:05:06.513849 kernel: audit: type=1334 audit(1768586706.509:884): prog-id=105 op=UNLOAD Jan 16 18:05:06.526733 kubelet[2796]: E0116 18:05:06.526123 2796 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56300->10.0.0.2:2379: read: connection timed out" Jan 16 18:05:06.530856 systemd[1]: cri-containerd-d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169.scope: Deactivated successfully. Jan 16 18:05:06.532698 systemd[1]: cri-containerd-d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169.scope: Consumed 4.897s CPU time, 25.4M memory peak, 3.1M read from disk. 
Jan 16 18:05:06.534000 audit: BPF prog-id=255 op=LOAD Jan 16 18:05:06.534000 audit: BPF prog-id=81 op=UNLOAD Jan 16 18:05:06.536887 kernel: audit: type=1334 audit(1768586706.534:885): prog-id=255 op=LOAD Jan 16 18:05:06.537043 kernel: audit: type=1334 audit(1768586706.534:886): prog-id=81 op=UNLOAD Jan 16 18:05:06.538097 kernel: audit: type=1334 audit(1768586706.537:887): prog-id=96 op=UNLOAD Jan 16 18:05:06.537000 audit: BPF prog-id=96 op=UNLOAD Jan 16 18:05:06.537000 audit: BPF prog-id=100 op=UNLOAD Jan 16 18:05:06.539103 kernel: audit: type=1334 audit(1768586706.537:888): prog-id=100 op=UNLOAD Jan 16 18:05:06.543863 containerd[1571]: time="2026-01-16T18:05:06.543711342Z" level=info msg="received container exit event container_id:\"d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169\" id:\"d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169\" pid:2637 exit_status:1 exited_at:{seconds:1768586706 nanos:542424656}" Jan 16 18:05:06.563577 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940-rootfs.mount: Deactivated successfully. Jan 16 18:05:06.589672 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169-rootfs.mount: Deactivated successfully. Jan 16 18:05:07.011881 systemd[1]: cri-containerd-20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135.scope: Deactivated successfully. Jan 16 18:05:07.014734 systemd[1]: cri-containerd-20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135.scope: Consumed 44.789s CPU time, 99.1M memory peak. 
Jan 16 18:05:07.015000 audit: BPF prog-id=144 op=UNLOAD Jan 16 18:05:07.015000 audit: BPF prog-id=148 op=UNLOAD Jan 16 18:05:07.016843 kernel: audit: type=1334 audit(1768586707.015:889): prog-id=144 op=UNLOAD Jan 16 18:05:07.016928 kernel: audit: type=1334 audit(1768586707.015:890): prog-id=148 op=UNLOAD Jan 16 18:05:07.018150 containerd[1571]: time="2026-01-16T18:05:07.018069300Z" level=info msg="received container exit event container_id:\"20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135\" id:\"20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135\" pid:3116 exit_status:1 exited_at:{seconds:1768586707 nanos:14514122}" Jan 16 18:05:07.044374 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135-rootfs.mount: Deactivated successfully. Jan 16 18:05:07.445607 kubelet[2796]: I0116 18:05:07.444070 2796 scope.go:117] "RemoveContainer" containerID="617a81f864af3374c066117a3113770705ca6302078555631cfa1a9ffc67f940" Jan 16 18:05:07.445607 kubelet[2796]: I0116 18:05:07.444871 2796 scope.go:117] "RemoveContainer" containerID="d1de5aad5d010c71bc002602c4073f33ecfdb5af8278e4c006d0634cb8665169" Jan 16 18:05:07.476320 kubelet[2796]: I0116 18:05:07.476040 2796 scope.go:117] "RemoveContainer" containerID="20f01b210028381a979aa846377fb663d2b7e57d613c82a3eb881d54d7efc135" Jan 16 18:05:07.478351 containerd[1571]: time="2026-01-16T18:05:07.476926555Z" level=info msg="CreateContainer within sandbox \"83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 16 18:05:07.496704 containerd[1571]: time="2026-01-16T18:05:07.495599927Z" level=info msg="Container aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:07.500011 containerd[1571]: time="2026-01-16T18:05:07.499891468Z" level=info msg="CreateContainer within sandbox 
\"832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 16 18:05:07.503704 containerd[1571]: time="2026-01-16T18:05:07.503372445Z" level=info msg="CreateContainer within sandbox \"45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 16 18:05:07.514959 containerd[1571]: time="2026-01-16T18:05:07.514914222Z" level=info msg="CreateContainer within sandbox \"83db1f42a8dfc2de052617c35f02bf9900c77e90f1c52d4e62fe70849e5ce580\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165\"" Jan 16 18:05:07.516186 containerd[1571]: time="2026-01-16T18:05:07.515799826Z" level=info msg="Container b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:07.523412 containerd[1571]: time="2026-01-16T18:05:07.523326943Z" level=info msg="StartContainer for \"aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165\"" Jan 16 18:05:07.526523 containerd[1571]: time="2026-01-16T18:05:07.526440238Z" level=info msg="connecting to shim aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165" address="unix:///run/containerd/s/caeda84c22ae067ebbb90f4d5406a81abd9f59705b65e975f2299f19ea3d7659" protocol=ttrpc version=3 Jan 16 18:05:07.532234 containerd[1571]: time="2026-01-16T18:05:07.532170747Z" level=info msg="CreateContainer within sandbox \"832b2b2d5ceb3be53fda973404c5c805dcd8452be743d00d12c0596554813e10\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5\"" Jan 16 18:05:07.533850 containerd[1571]: time="2026-01-16T18:05:07.533810675Z" level=info msg="StartContainer for \"b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5\"" Jan 16 18:05:07.535364 
containerd[1571]: time="2026-01-16T18:05:07.535322602Z" level=info msg="Container fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5: CDI devices from CRI Config.CDIDevices: []" Jan 16 18:05:07.535862 containerd[1571]: time="2026-01-16T18:05:07.535441523Z" level=info msg="connecting to shim b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5" address="unix:///run/containerd/s/1c62d78e0d09c0020cf321c81a4297c0dba149b92c687b32fd894980653b37ce" protocol=ttrpc version=3 Jan 16 18:05:07.549574 containerd[1571]: time="2026-01-16T18:05:07.549522832Z" level=info msg="CreateContainer within sandbox \"45f77973c6a577a5d0b58f5e1ab9673cd1914487b4fc88826213e75cd01a4898\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5\"" Jan 16 18:05:07.551904 containerd[1571]: time="2026-01-16T18:05:07.551844843Z" level=info msg="StartContainer for \"fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5\"" Jan 16 18:05:07.555989 containerd[1571]: time="2026-01-16T18:05:07.554366816Z" level=info msg="connecting to shim fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5" address="unix:///run/containerd/s/ce924340609ad11cb08b8f373a7bb33affed250017f8dcbc0ea33d1abc680e32" protocol=ttrpc version=3 Jan 16 18:05:07.565928 systemd[1]: Started cri-containerd-b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5.scope - libcontainer container b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5. Jan 16 18:05:07.585346 systemd[1]: Started cri-containerd-aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165.scope - libcontainer container aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165. Jan 16 18:05:07.603833 systemd[1]: Started cri-containerd-fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5.scope - libcontainer container fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5. 
Jan 16 18:05:07.611000 audit: BPF prog-id=256 op=LOAD Jan 16 18:05:07.612000 audit: BPF prog-id=257 op=LOAD Jan 16 18:05:07.612000 audit[5302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128180 a2=98 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.612000 audit: BPF prog-id=257 op=UNLOAD Jan 16 18:05:07.612000 audit[5302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.612000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.612000 audit: BPF prog-id=258 op=LOAD Jan 16 18:05:07.612000 audit[5302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001283e8 a2=98 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.612000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.613000 audit: BPF prog-id=259 op=LOAD Jan 16 18:05:07.613000 audit[5302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000128168 a2=98 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.613000 audit: BPF prog-id=259 op=UNLOAD Jan 16 18:05:07.613000 audit[5302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.613000 audit: BPF prog-id=258 op=UNLOAD Jan 16 18:05:07.613000 audit[5302]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 
16 18:05:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.613000 audit: BPF prog-id=260 op=LOAD Jan 16 18:05:07.613000 audit[5302]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000128648 a2=98 a3=0 items=0 ppid=2480 pid=5302 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.613000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6233363462623337653465663461383334396437666565363732613735 Jan 16 18:05:07.617000 audit: BPF prog-id=261 op=LOAD Jan 16 18:05:07.620000 audit: BPF prog-id=262 op=LOAD Jan 16 18:05:07.620000 audit[5301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8180 a2=98 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.620000 audit: BPF prog-id=262 op=UNLOAD Jan 16 18:05:07.620000 audit[5301]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) 
ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.620000 audit: BPF prog-id=263 op=LOAD Jan 16 18:05:07.620000 audit[5301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a83e8 a2=98 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.620000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.621000 audit: BPF prog-id=264 op=LOAD Jan 16 18:05:07.621000 audit[5301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a8168 a2=98 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.621000 audit: BPF prog-id=264 op=UNLOAD Jan 16 18:05:07.621000 audit[5301]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 
suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.621000 audit: BPF prog-id=263 op=UNLOAD Jan 16 18:05:07.621000 audit[5301]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.621000 audit: BPF prog-id=265 op=LOAD Jan 16 18:05:07.621000 audit[5301]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a8648 a2=98 a3=0 items=0 ppid=2514 pid=5301 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.621000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165663333663834666434643466346465383135666265306435626565 Jan 16 18:05:07.650000 audit: BPF prog-id=266 op=LOAD Jan 16 18:05:07.651000 audit: BPF prog-id=267 op=LOAD Jan 16 18:05:07.651000 audit[5324]: SYSCALL arch=c00000b7 syscall=280 success=yes 
exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.651000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.653000 audit: BPF prog-id=267 op=UNLOAD Jan 16 18:05:07.653000 audit[5324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.653000 audit: BPF prog-id=268 op=LOAD Jan 16 18:05:07.653000 audit[5324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.653000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.654000 audit: BPF prog-id=269 op=LOAD Jan 16 18:05:07.654000 audit[5324]: 
SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.654000 audit: BPF prog-id=269 op=UNLOAD Jan 16 18:05:07.654000 audit[5324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.654000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.655000 audit: BPF prog-id=268 op=UNLOAD Jan 16 18:05:07.655000 audit[5324]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.655000 audit: BPF prog-id=270 op=LOAD 
Jan 16 18:05:07.655000 audit[5324]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=2900 pid=5324 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 16 18:05:07.655000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6662346663656331613964623539643462383761663034323361663232 Jan 16 18:05:07.676372 containerd[1571]: time="2026-01-16T18:05:07.676336695Z" level=info msg="StartContainer for \"b364bb37e4ef4a8349d7fee672a7513be1977aa3c88c6fec4546a8d25ad58af5\" returns successfully" Jan 16 18:05:07.693414 containerd[1571]: time="2026-01-16T18:05:07.693333499Z" level=info msg="StartContainer for \"aef33f84fd4d4f4de815fbe0d5beef652b303c098aef5ed2de895b5b78213165\" returns successfully" Jan 16 18:05:07.708983 containerd[1571]: time="2026-01-16T18:05:07.708202412Z" level=info msg="StartContainer for \"fb4fcec1a9db59d4b87af0423af22057e704dde9f09a60aee3221ae06372b7e5\" returns successfully" Jan 16 18:05:08.502846 kubelet[2796]: E0116 18:05:08.502785 2796 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-76db8c587c-rvj9m" podUID="53e714ba-c9e3-42df-ae04-537a6b215f2f" Jan 16 18:05:10.621668 kubelet[2796]: E0116 18:05:10.614364 2796 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56096->10.0.0.2:2379: read: connection timed out" event="&Event{ObjectMeta:{kube-apiserver-ci-4580-0-0-p-e4bb445d88.188b483bc1191e91 kube-system 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:kube-system,Name:kube-apiserver-ci-4580-0-0-p-e4bb445d88,UID:680c78a69ceec4d014b9ac2c8c9f439d,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Readiness probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:ci-4580-0-0-p-e4bb445d88,},FirstTimestamp:2026-01-16 18:05:00.152970897 +0000 UTC m=+235.770476921,LastTimestamp:2026-01-16 18:05:00.152970897 +0000 UTC m=+235.770476921,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4580-0-0-p-e4bb445d88,}" Jan 16 18:05:10.934893 kubelet[2796]: I0116 18:05:10.934191 2796 status_manager.go:890] "Failed to get status for pod" podUID="30371a55-b2a2-4dfe-86cf-86b9aadb477e" pod="calico-system/csi-node-driver-lnrx8" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.0.3:56210->10.0.0.2:2379: read: connection timed out"