Jan 20 23:51:49.439305 kernel: Booting Linux on physical CPU 0x0000000000 [0x413fd0c1] Jan 20 23:51:49.439329 kernel: Linux version 6.12.66-flatcar (build@pony-truck.infra.kinvolk.io) (aarch64-cros-linux-gnu-gcc (Gentoo Hardened 14.3.1_p20250801 p4) 14.3.1 20250801, GNU ld (Gentoo 2.45 p3) 2.45.0) #1 SMP PREEMPT Tue Jan 20 22:19:20 -00 2026 Jan 20 23:51:49.439340 kernel: KASLR enabled Jan 20 23:51:49.439346 kernel: efi: EFI v2.7 by EDK II Jan 20 23:51:49.439351 kernel: efi: SMBIOS 3.0=0x43bed0000 MEMATTR=0x43a714018 ACPI 2.0=0x438430018 RNG=0x43843e818 MEMRESERVE=0x438357218 Jan 20 23:51:49.439357 kernel: random: crng init done Jan 20 23:51:49.439364 kernel: secureboot: Secure boot disabled Jan 20 23:51:49.439370 kernel: ACPI: Early table checksum verification disabled Jan 20 23:51:49.439376 kernel: ACPI: RSDP 0x0000000438430018 000024 (v02 BOCHS ) Jan 20 23:51:49.439384 kernel: ACPI: XSDT 0x000000043843FE98 000074 (v01 BOCHS BXPC 00000001 01000013) Jan 20 23:51:49.439390 kernel: ACPI: FACP 0x000000043843FA98 000114 (v06 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439396 kernel: ACPI: DSDT 0x0000000438437518 0014A2 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439402 kernel: ACPI: APIC 0x000000043843FC18 0001A8 (v04 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439408 kernel: ACPI: PPTT 0x000000043843D898 000114 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439417 kernel: ACPI: GTDT 0x000000043843E898 000068 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439424 kernel: ACPI: MCFG 0x000000043843FF98 00003C (v01 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439430 kernel: ACPI: SPCR 0x000000043843E498 000050 (v02 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439436 kernel: ACPI: DBG2 0x000000043843E798 000057 (v00 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439443 kernel: ACPI: SRAT 0x000000043843E518 0000A0 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439449 kernel: ACPI: IORT 0x000000043843E618 000080 (v03 BOCHS BXPC 00000001 BXPC 00000001) Jan 20 23:51:49.439470 kernel: ACPI: BGRT 0x000000043843E718 000038 (v01 INTEL EDK2 00000002 01000013) Jan 20 23:51:49.439477 kernel: ACPI: SPCR: console: pl011,mmio32,0x9000000,9600 Jan 20 23:51:49.439483 kernel: ACPI: Use ACPI SPCR as default console: Yes Jan 20 23:51:49.439492 kernel: ACPI: SRAT: Node 0 PXM 0 [mem 0x40000000-0x43fffffff] Jan 20 23:51:49.439498 kernel: NODE_DATA(0) allocated [mem 0x43dff1a00-0x43dff8fff] Jan 20 23:51:49.439505 kernel: Zone ranges: Jan 20 23:51:49.439511 kernel: DMA [mem 0x0000000040000000-0x00000000ffffffff] Jan 20 23:51:49.439517 kernel: DMA32 empty Jan 20 23:51:49.439524 kernel: Normal [mem 0x0000000100000000-0x000000043fffffff] Jan 20 23:51:49.439530 kernel: Device empty Jan 20 23:51:49.439536 kernel: Movable zone start for each node Jan 20 23:51:49.439543 kernel: Early memory node ranges Jan 20 23:51:49.439549 kernel: node 0: [mem 0x0000000040000000-0x000000043843ffff] Jan 20 23:51:49.439555 kernel: node 0: [mem 0x0000000438440000-0x000000043872ffff] Jan 20 23:51:49.439561 kernel: node 0: [mem 0x0000000438730000-0x000000043bbfffff] Jan 20 23:51:49.439569 kernel: node 0: [mem 0x000000043bc00000-0x000000043bfdffff] Jan 20 23:51:49.439575 kernel: node 0: [mem 0x000000043bfe0000-0x000000043fffffff] Jan 20 23:51:49.439582 kernel: Initmem setup node 0 [mem 0x0000000040000000-0x000000043fffffff] Jan 20 23:51:49.439588 kernel: cma: Reserved 16 MiB at 0x00000000ff000000 on node -1 Jan 20 23:51:49.439595 kernel: psci: probing for conduit method from ACPI. 
Jan 20 23:51:49.439604 kernel: psci: PSCIv1.3 detected in firmware. Jan 20 23:51:49.439612 kernel: psci: Using standard PSCI v0.2 function IDs Jan 20 23:51:49.439619 kernel: psci: Trusted OS migration not required Jan 20 23:51:49.439626 kernel: psci: SMC Calling Convention v1.1 Jan 20 23:51:49.439633 kernel: smccc: KVM: hypervisor services detected (0x00000000 0x00000000 0x00000000 0x00000003) Jan 20 23:51:49.439639 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x0 -> Node 0 Jan 20 23:51:49.439646 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x1 -> Node 0 Jan 20 23:51:49.439653 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x2 -> Node 0 Jan 20 23:51:49.439660 kernel: ACPI: NUMA: SRAT: PXM 0 -> MPIDR 0x3 -> Node 0 Jan 20 23:51:49.439668 kernel: percpu: Embedded 33 pages/cpu s98200 r8192 d28776 u135168 Jan 20 23:51:49.439674 kernel: pcpu-alloc: s98200 r8192 d28776 u135168 alloc=33*4096 Jan 20 23:51:49.439681 kernel: pcpu-alloc: [0] 0 [0] 1 [0] 2 [0] 3 Jan 20 23:51:49.439688 kernel: Detected PIPT I-cache on CPU0 Jan 20 23:51:49.439695 kernel: CPU features: detected: GIC system register CPU interface Jan 20 23:51:49.439702 kernel: CPU features: detected: Spectre-v4 Jan 20 23:51:49.439708 kernel: CPU features: detected: Spectre-BHB Jan 20 23:51:49.439715 kernel: CPU features: kernel page table isolation forced ON by KASLR Jan 20 23:51:49.439721 kernel: CPU features: detected: Kernel page table isolation (KPTI) Jan 20 23:51:49.439728 kernel: CPU features: detected: ARM erratum 1418040 Jan 20 23:51:49.439735 kernel: CPU features: detected: SSBS not fully self-synchronizing Jan 20 23:51:49.439743 kernel: alternatives: applying boot alternatives Jan 20 23:51:49.439751 kernel: Kernel command line: BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:51:49.439758 kernel: Dentry cache hash table entries: 2097152 (order: 12, 16777216 bytes, linear) Jan 20 23:51:49.439765 kernel: Inode-cache hash table entries: 1048576 (order: 11, 8388608 bytes, linear) Jan 20 23:51:49.439771 kernel: Fallback order for Node 0: 0 Jan 20 23:51:49.439778 kernel: Built 1 zonelists, mobility grouping on. Total pages: 4194304 Jan 20 23:51:49.439785 kernel: Policy zone: Normal Jan 20 23:51:49.439792 kernel: mem auto-init: stack:off, heap alloc:off, heap free:off Jan 20 23:51:49.439798 kernel: software IO TLB: area num 4. Jan 20 23:51:49.439805 kernel: software IO TLB: mapped [mem 0x00000000fb000000-0x00000000ff000000] (64MB) Jan 20 23:51:49.439813 kernel: SLUB: HWalign=64, Order=0-3, MinObjects=0, CPUs=4, Nodes=1 Jan 20 23:51:49.439820 kernel: rcu: Preemptible hierarchical RCU implementation. Jan 20 23:51:49.439828 kernel: rcu: RCU event tracing is enabled. Jan 20 23:51:49.439835 kernel: rcu: RCU restricting CPUs from NR_CPUS=512 to nr_cpu_ids=4. Jan 20 23:51:49.439842 kernel: Trampoline variant of Tasks RCU enabled. Jan 20 23:51:49.439848 kernel: Tracing variant of Tasks RCU enabled. Jan 20 23:51:49.439855 kernel: rcu: RCU calculated value of scheduler-enlistment delay is 100 jiffies. Jan 20 23:51:49.439862 kernel: rcu: Adjusting geometry for rcu_fanout_leaf=16, nr_cpu_ids=4 Jan 20 23:51:49.439869 kernel: RCU Tasks: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. 
Jan 20 23:51:49.439876 kernel: RCU Tasks Trace: Setting shift to 2 and lim to 1 rcu_task_cb_adjust=1 rcu_task_cpu_ids=4. Jan 20 23:51:49.439882 kernel: NR_IRQS: 64, nr_irqs: 64, preallocated irqs: 0 Jan 20 23:51:49.439890 kernel: GICv3: 256 SPIs implemented Jan 20 23:51:49.439897 kernel: GICv3: 0 Extended SPIs implemented Jan 20 23:51:49.439904 kernel: Root IRQ handler: gic_handle_irq Jan 20 23:51:49.439910 kernel: GICv3: GICv3 features: 16 PPIs, DirectLPI Jan 20 23:51:49.439917 kernel: GICv3: GICD_CTRL.DS=1, SCR_EL3.FIQ=0 Jan 20 23:51:49.439924 kernel: GICv3: CPU0: found redistributor 0 region 0:0x00000000080a0000 Jan 20 23:51:49.439931 kernel: ITS [mem 0x08080000-0x0809ffff] Jan 20 23:51:49.439938 kernel: ITS@0x0000000008080000: allocated 8192 Devices @100110000 (indirect, esz 8, psz 64K, shr 1) Jan 20 23:51:49.439945 kernel: ITS@0x0000000008080000: allocated 8192 Interrupt Collections @100120000 (flat, esz 8, psz 64K, shr 1) Jan 20 23:51:49.439951 kernel: GICv3: using LPI property table @0x0000000100130000 Jan 20 23:51:49.439958 kernel: GICv3: CPU0: using allocated LPI pending table @0x0000000100140000 Jan 20 23:51:49.439965 kernel: rcu: srcu_init: Setting srcu_struct sizes based on contention. Jan 20 23:51:49.439973 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:51:49.439980 kernel: arch_timer: cp15 timer(s) running at 25.00MHz (virt). Jan 20 23:51:49.439986 kernel: clocksource: arch_sys_counter: mask: 0xffffffffffffff max_cycles: 0x5c40939b5, max_idle_ns: 440795202646 ns Jan 20 23:51:49.439993 kernel: sched_clock: 56 bits at 25MHz, resolution 40ns, wraps every 4398046511100ns Jan 20 23:51:49.440000 kernel: arm-pv: using stolen time PV Jan 20 23:51:49.440008 kernel: Console: colour dummy device 80x25 Jan 20 23:51:49.440015 kernel: ACPI: Core revision 20240827 Jan 20 23:51:49.440022 kernel: Calibrating delay loop (skipped), value calculated using timer frequency.. 50.00 BogoMIPS (lpj=25000) Jan 20 23:51:49.440031 kernel: pid_max: default: 32768 minimum: 301 Jan 20 23:51:49.440038 kernel: LSM: initializing lsm=lockdown,capability,landlock,selinux,ima Jan 20 23:51:49.440045 kernel: landlock: Up and running. Jan 20 23:51:49.440052 kernel: SELinux: Initializing. Jan 20 23:51:49.440059 kernel: Mount-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 23:51:49.440066 kernel: Mountpoint-cache hash table entries: 32768 (order: 6, 262144 bytes, linear) Jan 20 23:51:49.440073 kernel: rcu: Hierarchical SRCU implementation. Jan 20 23:51:49.440081 kernel: rcu: Max phase no-delay instances is 400. Jan 20 23:51:49.440089 kernel: Timer migration: 1 hierarchy levels; 8 children per group; 1 crossnode level Jan 20 23:51:49.440097 kernel: Remapping and enabling EFI services. Jan 20 23:51:49.440104 kernel: smp: Bringing up secondary CPUs ... 
Jan 20 23:51:49.440111 kernel: Detected PIPT I-cache on CPU1 Jan 20 23:51:49.440118 kernel: GICv3: CPU1: found redistributor 1 region 0:0x00000000080c0000 Jan 20 23:51:49.440125 kernel: GICv3: CPU1: using allocated LPI pending table @0x0000000100150000 Jan 20 23:51:49.440132 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:51:49.440142 kernel: CPU1: Booted secondary processor 0x0000000001 [0x413fd0c1] Jan 20 23:51:49.440149 kernel: Detected PIPT I-cache on CPU2 Jan 20 23:51:49.440161 kernel: GICv3: CPU2: found redistributor 2 region 0:0x00000000080e0000 Jan 20 23:51:49.440171 kernel: GICv3: CPU2: using allocated LPI pending table @0x0000000100160000 Jan 20 23:51:49.440179 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:51:49.440186 kernel: CPU2: Booted secondary processor 0x0000000002 [0x413fd0c1] Jan 20 23:51:49.440193 kernel: Detected PIPT I-cache on CPU3 Jan 20 23:51:49.440201 kernel: GICv3: CPU3: found redistributor 3 region 0:0x0000000008100000 Jan 20 23:51:49.440210 kernel: GICv3: CPU3: using allocated LPI pending table @0x0000000100170000 Jan 20 23:51:49.440217 kernel: arch_timer: Enabling local workaround for ARM erratum 1418040 Jan 20 23:51:49.440225 kernel: CPU3: Booted secondary processor 0x0000000003 [0x413fd0c1] Jan 20 23:51:49.440232 kernel: smp: Brought up 1 node, 4 CPUs Jan 20 23:51:49.440239 kernel: SMP: Total of 4 processors activated. Jan 20 23:51:49.440247 kernel: CPU: All CPU(s) started at EL1 Jan 20 23:51:49.440255 kernel: CPU features: detected: 32-bit EL0 Support Jan 20 23:51:49.440263 kernel: CPU features: detected: Data cache clean to the PoU not required for I/D coherence Jan 20 23:51:49.440271 kernel: CPU features: detected: Common not Private translations Jan 20 23:51:49.440278 kernel: CPU features: detected: CRC32 instructions Jan 20 23:51:49.440286 kernel: CPU features: detected: Enhanced Virtualization Traps Jan 20 23:51:49.440293 kernel: CPU features: detected: RCpc load-acquire (LDAPR) Jan 20 23:51:49.440301 kernel: CPU features: detected: LSE atomic instructions Jan 20 23:51:49.440310 kernel: CPU features: detected: Privileged Access Never Jan 20 23:51:49.440317 kernel: CPU features: detected: RAS Extension Support Jan 20 23:51:49.440324 kernel: CPU features: detected: Speculative Store Bypassing Safe (SSBS) Jan 20 23:51:49.440332 kernel: alternatives: applying system-wide alternatives Jan 20 23:51:49.440339 kernel: CPU features: detected: Hardware dirty bit management on CPU0-3 Jan 20 23:51:49.440347 kernel: Memory: 16324496K/16777216K available (11200K kernel code, 2458K rwdata, 9088K rodata, 12480K init, 1038K bss, 430000K reserved, 16384K cma-reserved) Jan 20 23:51:49.440355 kernel: devtmpfs: initialized Jan 20 23:51:49.440362 kernel: clocksource: jiffies: mask: 0xffffffff max_cycles: 0xffffffff, max_idle_ns: 1911260446275000 ns Jan 20 23:51:49.440371 kernel: futex hash table entries: 1024 (order: 4, 65536 bytes, linear) Jan 20 23:51:49.440378 kernel: 2G module region forced by RANDOMIZE_MODULE_REGION_FULL Jan 20 23:51:49.440386 kernel: 0 pages in range for non-PLT usage Jan 20 23:51:49.440393 kernel: 515168 pages in range for PLT usage Jan 20 23:51:49.440400 kernel: pinctrl core: initialized pinctrl subsystem Jan 20 23:51:49.440408 kernel: SMBIOS 3.0.0 present. 
Jan 20 23:51:49.440415 kernel: DMI: QEMU KVM Virtual Machine, BIOS 0.0.0 02/06/2015 Jan 20 23:51:49.440423 kernel: DMI: Memory slots populated: 1/1 Jan 20 23:51:49.440431 kernel: NET: Registered PF_NETLINK/PF_ROUTE protocol family Jan 20 23:51:49.440438 kernel: DMA: preallocated 2048 KiB GFP_KERNEL pool for atomic allocations Jan 20 23:51:49.440446 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA pool for atomic allocations Jan 20 23:51:49.440463 kernel: DMA: preallocated 2048 KiB GFP_KERNEL|GFP_DMA32 pool for atomic allocations Jan 20 23:51:49.440471 kernel: audit: initializing netlink subsys (disabled) Jan 20 23:51:49.440479 kernel: audit: type=2000 audit(0.040:1): state=initialized audit_enabled=0 res=1 Jan 20 23:51:49.440488 kernel: thermal_sys: Registered thermal governor 'step_wise' Jan 20 23:51:49.440496 kernel: cpuidle: using governor menu Jan 20 23:51:49.440503 kernel: hw-breakpoint: found 6 breakpoint and 4 watchpoint registers. Jan 20 23:51:49.440511 kernel: ASID allocator initialised with 32768 entries Jan 20 23:51:49.440518 kernel: acpiphp: ACPI Hot Plug PCI Controller Driver version: 0.5 Jan 20 23:51:49.440526 kernel: Serial: AMBA PL011 UART driver Jan 20 23:51:49.440533 kernel: HugeTLB: registered 1.00 GiB page size, pre-allocated 0 pages Jan 20 23:51:49.440543 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 1.00 GiB page Jan 20 23:51:49.440550 kernel: HugeTLB: registered 32.0 MiB page size, pre-allocated 0 pages Jan 20 23:51:49.440558 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 32.0 MiB page Jan 20 23:51:49.440565 kernel: HugeTLB: registered 2.00 MiB page size, pre-allocated 0 pages Jan 20 23:51:49.440573 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 2.00 MiB page Jan 20 23:51:49.440580 kernel: HugeTLB: registered 64.0 KiB page size, pre-allocated 0 pages Jan 20 23:51:49.440588 kernel: HugeTLB: 0 KiB vmemmap can be freed for a 64.0 KiB page Jan 20 23:51:49.440595 kernel: ACPI: Added _OSI(Module Device) Jan 20 23:51:49.440604 kernel: ACPI: Added _OSI(Processor Device) Jan 20 23:51:49.440612 kernel: ACPI: Added _OSI(Processor Aggregator Device) Jan 20 23:51:49.440619 kernel: ACPI: 1 ACPI AML tables successfully acquired and loaded Jan 20 23:51:49.440626 kernel: ACPI: Interpreter enabled Jan 20 23:51:49.440634 kernel: ACPI: Using GIC for interrupt routing Jan 20 23:51:49.440641 kernel: ACPI: MCFG table detected, 1 entries Jan 20 23:51:49.440648 kernel: ACPI: CPU0 has been hot-added Jan 20 23:51:49.440657 kernel: ACPI: CPU1 has been hot-added Jan 20 23:51:49.440665 kernel: ACPI: CPU2 has been hot-added Jan 20 23:51:49.440672 kernel: ACPI: CPU3 has been hot-added Jan 20 23:51:49.440679 kernel: ARMH0011:00: ttyAMA0 at MMIO 0x9000000 (irq = 12, base_baud = 0) is a SBSA Jan 20 23:51:49.440687 kernel: printk: legacy console [ttyAMA0] enabled Jan 20 23:51:49.440695 kernel: ACPI: PCI Root Bridge [PCI0] (domain 0000 [bus 00-ff]) Jan 20 23:51:49.440885 kernel: acpi PNP0A08:00: _OSC: OS supports [ExtendedConfig ASPM ClockPM Segments MSI HPX-Type3] Jan 20 23:51:49.440982 kernel: acpi PNP0A08:00: _OSC: platform does not support [LTR] Jan 20 23:51:49.441067 kernel: acpi PNP0A08:00: _OSC: OS now controls [PCIeHotplug PME AER PCIeCapability] Jan 20 23:51:49.441149 kernel: acpi PNP0A08:00: ECAM area [mem 0x4010000000-0x401fffffff] reserved by PNP0C02:00 Jan 20 23:51:49.441231 kernel: acpi PNP0A08:00: ECAM at [mem 0x4010000000-0x401fffffff] for [bus 00-ff] Jan 20 23:51:49.441240 kernel: ACPI: Remapped I/O 0x000000003eff0000 to [io 0x0000-0xffff window] Jan 20 23:51:49.441248 
kernel: PCI host bridge to bus 0000:00 Jan 20 23:51:49.441340 kernel: pci_bus 0000:00: root bus resource [mem 0x10000000-0x3efeffff window] Jan 20 23:51:49.441415 kernel: pci_bus 0000:00: root bus resource [io 0x0000-0xffff window] Jan 20 23:51:49.441525 kernel: pci_bus 0000:00: root bus resource [mem 0x8000000000-0xffffffffff window] Jan 20 23:51:49.441604 kernel: pci_bus 0000:00: root bus resource [bus 00-ff] Jan 20 23:51:49.441703 kernel: pci 0000:00:00.0: [1b36:0008] type 00 class 0x060000 conventional PCI endpoint Jan 20 23:51:49.441808 kernel: pci 0000:00:01.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.441898 kernel: pci 0000:00:01.0: BAR 0 [mem 0x125a0000-0x125a0fff] Jan 20 23:51:49.441983 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 20 23:51:49.442066 kernel: pci 0000:00:01.0: bridge window [mem 0x12400000-0x124fffff] Jan 20 23:51:49.442158 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80000fffff 64bit pref] Jan 20 23:51:49.442251 kernel: pci 0000:00:01.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.442342 kernel: pci 0000:00:01.1: BAR 0 [mem 0x1259f000-0x1259ffff] Jan 20 23:51:49.442423 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 20 23:51:49.442533 kernel: pci 0000:00:01.1: bridge window [mem 0x12300000-0x123fffff] Jan 20 23:51:49.442628 kernel: pci 0000:00:01.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.442711 kernel: pci 0000:00:01.2: BAR 0 [mem 0x1259e000-0x1259efff] Jan 20 23:51:49.442796 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 20 23:51:49.442878 kernel: pci 0000:00:01.2: bridge window [mem 0x12200000-0x122fffff] Jan 20 23:51:49.442959 kernel: pci 0000:00:01.2: bridge window [mem 0x8000100000-0x80001fffff 64bit pref] Jan 20 23:51:49.443048 kernel: pci 0000:00:01.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.443130 kernel: pci 0000:00:01.3: BAR 0 [mem 0x1259d000-0x1259dfff] Jan 20 23:51:49.443211 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 20 23:51:49.443294 kernel: pci 0000:00:01.3: bridge window [mem 0x8000200000-0x80002fffff 64bit pref] Jan 20 23:51:49.443384 kernel: pci 0000:00:01.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.443480 kernel: pci 0000:00:01.4: BAR 0 [mem 0x1259c000-0x1259cfff] Jan 20 23:51:49.443566 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 20 23:51:49.443650 kernel: pci 0000:00:01.4: bridge window [mem 0x12100000-0x121fffff] Jan 20 23:51:49.443745 kernel: pci 0000:00:01.4: bridge window [mem 0x8000300000-0x80003fffff 64bit pref] Jan 20 23:51:49.443850 kernel: pci 0000:00:01.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.443943 kernel: pci 0000:00:01.5: BAR 0 [mem 0x1259b000-0x1259bfff] Jan 20 23:51:49.444037 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 20 23:51:49.444125 kernel: pci 0000:00:01.5: bridge window [mem 0x12000000-0x120fffff] Jan 20 23:51:49.444220 kernel: pci 0000:00:01.5: bridge window [mem 0x8000400000-0x80004fffff 64bit pref] Jan 20 23:51:49.444308 kernel: pci 0000:00:01.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.444424 kernel: pci 0000:00:01.6: BAR 0 [mem 0x1259a000-0x1259afff] Jan 20 23:51:49.444539 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 20 23:51:49.444632 kernel: pci 0000:00:01.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.444718 kernel: pci 0000:00:01.7: BAR 0 [mem 0x12599000-0x12599fff] Jan 20 23:51:49.444800 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 20 
23:51:49.444916 kernel: pci 0000:00:02.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.445007 kernel: pci 0000:00:02.0: BAR 0 [mem 0x12598000-0x12598fff] Jan 20 23:51:49.445090 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 20 23:51:49.445179 kernel: pci 0000:00:02.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.445260 kernel: pci 0000:00:02.1: BAR 0 [mem 0x12597000-0x12597fff] Jan 20 23:51:49.445344 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 20 23:51:49.445431 kernel: pci 0000:00:02.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.445538 kernel: pci 0000:00:02.2: BAR 0 [mem 0x12596000-0x12596fff] Jan 20 23:51:49.445628 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 20 23:51:49.445721 kernel: pci 0000:00:02.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.445803 kernel: pci 0000:00:02.3: BAR 0 [mem 0x12595000-0x12595fff] Jan 20 23:51:49.445889 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 20 23:51:49.445992 kernel: pci 0000:00:02.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.446077 kernel: pci 0000:00:02.4: BAR 0 [mem 0x12594000-0x12594fff] Jan 20 23:51:49.446159 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 20 23:51:49.446249 kernel: pci 0000:00:02.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.446332 kernel: pci 0000:00:02.5: BAR 0 [mem 0x12593000-0x12593fff] Jan 20 23:51:49.446417 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 20 23:51:49.446527 kernel: pci 0000:00:02.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.446614 kernel: pci 0000:00:02.6: BAR 0 [mem 0x12592000-0x12592fff] Jan 20 23:51:49.446696 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 20 23:51:49.446788 kernel: pci 0000:00:02.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.446874 kernel: pci 0000:00:02.7: BAR 0 [mem 0x12591000-0x12591fff] Jan 20 23:51:49.446956 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 20 23:51:49.447044 kernel: pci 0000:00:03.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.447127 kernel: pci 0000:00:03.0: BAR 0 [mem 0x12590000-0x12590fff] Jan 20 23:51:49.447209 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 20 23:51:49.447302 kernel: pci 0000:00:03.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.447389 kernel: pci 0000:00:03.1: BAR 0 [mem 0x1258f000-0x1258ffff] Jan 20 23:51:49.447489 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 20 23:51:49.447578 kernel: pci 0000:00:03.1: bridge window [io 0xf000-0xffff] Jan 20 23:51:49.447662 kernel: pci 0000:00:03.1: bridge window [mem 0x11e00000-0x11ffffff] Jan 20 23:51:49.447754 kernel: pci 0000:00:03.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.447854 kernel: pci 0000:00:03.2: BAR 0 [mem 0x1258e000-0x1258efff] Jan 20 23:51:49.447966 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 20 23:51:49.448076 kernel: pci 0000:00:03.2: bridge window [io 0xe000-0xefff] Jan 20 23:51:49.448162 kernel: pci 0000:00:03.2: bridge window [mem 0x11c00000-0x11dfffff] Jan 20 23:51:49.448253 kernel: pci 0000:00:03.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.448336 kernel: pci 0000:00:03.3: BAR 0 [mem 0x1258d000-0x1258dfff] Jan 20 23:51:49.448419 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 20 23:51:49.448524 kernel: pci 0000:00:03.3: bridge window [io 0xd000-0xdfff] Jan 20 23:51:49.448613 kernel: pci 0000:00:03.3: bridge window [mem 
0x11a00000-0x11bfffff] Jan 20 23:51:49.448711 kernel: pci 0000:00:03.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.448794 kernel: pci 0000:00:03.4: BAR 0 [mem 0x1258c000-0x1258cfff] Jan 20 23:51:49.448894 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 20 23:51:49.448980 kernel: pci 0000:00:03.4: bridge window [io 0xc000-0xcfff] Jan 20 23:51:49.449066 kernel: pci 0000:00:03.4: bridge window [mem 0x11800000-0x119fffff] Jan 20 23:51:49.449158 kernel: pci 0000:00:03.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.449240 kernel: pci 0000:00:03.5: BAR 0 [mem 0x1258b000-0x1258bfff] Jan 20 23:51:49.449322 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 20 23:51:49.449403 kernel: pci 0000:00:03.5: bridge window [io 0xb000-0xbfff] Jan 20 23:51:49.449510 kernel: pci 0000:00:03.5: bridge window [mem 0x11600000-0x117fffff] Jan 20 23:51:49.449642 kernel: pci 0000:00:03.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.449730 kernel: pci 0000:00:03.6: BAR 0 [mem 0x1258a000-0x1258afff] Jan 20 23:51:49.449814 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 20 23:51:49.449896 kernel: pci 0000:00:03.6: bridge window [io 0xa000-0xafff] Jan 20 23:51:49.449978 kernel: pci 0000:00:03.6: bridge window [mem 0x11400000-0x115fffff] Jan 20 23:51:49.450070 kernel: pci 0000:00:03.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.450156 kernel: pci 0000:00:03.7: BAR 0 [mem 0x12589000-0x12589fff] Jan 20 23:51:49.450238 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 20 23:51:49.450319 kernel: pci 0000:00:03.7: bridge window [io 0x9000-0x9fff] Jan 20 23:51:49.450401 kernel: pci 0000:00:03.7: bridge window [mem 0x11200000-0x113fffff] Jan 20 23:51:49.450520 kernel: pci 0000:00:04.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.450621 kernel: pci 0000:00:04.0: BAR 0 [mem 0x12588000-0x12588fff] Jan 20 23:51:49.450709 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 20 23:51:49.450791 kernel: pci 0000:00:04.0: bridge window [io 0x8000-0x8fff] Jan 20 23:51:49.450875 kernel: pci 0000:00:04.0: bridge window [mem 0x11000000-0x111fffff] Jan 20 23:51:49.450964 kernel: pci 0000:00:04.1: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.451048 kernel: pci 0000:00:04.1: BAR 0 [mem 0x12587000-0x12587fff] Jan 20 23:51:49.451131 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 20 23:51:49.451215 kernel: pci 0000:00:04.1: bridge window [io 0x7000-0x7fff] Jan 20 23:51:49.451298 kernel: pci 0000:00:04.1: bridge window [mem 0x10e00000-0x10ffffff] Jan 20 23:51:49.451394 kernel: pci 0000:00:04.2: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.451496 kernel: pci 0000:00:04.2: BAR 0 [mem 0x12586000-0x12586fff] Jan 20 23:51:49.451610 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 20 23:51:49.451699 kernel: pci 0000:00:04.2: bridge window [io 0x6000-0x6fff] Jan 20 23:51:49.451788 kernel: pci 0000:00:04.2: bridge window [mem 0x10c00000-0x10dfffff] Jan 20 23:51:49.451885 kernel: pci 0000:00:04.3: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.451974 kernel: pci 0000:00:04.3: BAR 0 [mem 0x12585000-0x12585fff] Jan 20 23:51:49.452078 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 20 23:51:49.452163 kernel: pci 0000:00:04.3: bridge window [io 0x5000-0x5fff] Jan 20 23:51:49.452273 kernel: pci 0000:00:04.3: bridge window [mem 0x10a00000-0x10bfffff] Jan 20 23:51:49.452370 kernel: pci 0000:00:04.4: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 
23:51:49.452466 kernel: pci 0000:00:04.4: BAR 0 [mem 0x12584000-0x12584fff] Jan 20 23:51:49.452561 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 20 23:51:49.452648 kernel: pci 0000:00:04.4: bridge window [io 0x4000-0x4fff] Jan 20 23:51:49.452731 kernel: pci 0000:00:04.4: bridge window [mem 0x10800000-0x109fffff] Jan 20 23:51:49.452835 kernel: pci 0000:00:04.5: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.452924 kernel: pci 0000:00:04.5: BAR 0 [mem 0x12583000-0x12583fff] Jan 20 23:51:49.453006 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 20 23:51:49.453095 kernel: pci 0000:00:04.5: bridge window [io 0x3000-0x3fff] Jan 20 23:51:49.453179 kernel: pci 0000:00:04.5: bridge window [mem 0x10600000-0x107fffff] Jan 20 23:51:49.453271 kernel: pci 0000:00:04.6: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.453353 kernel: pci 0000:00:04.6: BAR 0 [mem 0x12582000-0x12582fff] Jan 20 23:51:49.453437 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 20 23:51:49.453539 kernel: pci 0000:00:04.6: bridge window [io 0x2000-0x2fff] Jan 20 23:51:49.453625 kernel: pci 0000:00:04.6: bridge window [mem 0x10400000-0x105fffff] Jan 20 23:51:49.453721 kernel: pci 0000:00:04.7: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.453821 kernel: pci 0000:00:04.7: BAR 0 [mem 0x12581000-0x12581fff] Jan 20 23:51:49.453914 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 20 23:51:49.453997 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x1fff] Jan 20 23:51:49.454079 kernel: pci 0000:00:04.7: bridge window [mem 0x10200000-0x103fffff] Jan 20 23:51:49.454171 kernel: pci 0000:00:05.0: [1b36:000c] type 01 class 0x060400 PCIe Root Port Jan 20 23:51:49.454258 kernel: pci 0000:00:05.0: BAR 0 [mem 0x12580000-0x12580fff] Jan 20 23:51:49.454339 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 20 23:51:49.454421 kernel: pci 0000:00:05.0: bridge window [io 0x0000-0x0fff] Jan 20 23:51:49.454529 kernel: pci 0000:00:05.0: bridge window [mem 0x10000000-0x101fffff] Jan 20 23:51:49.454651 kernel: pci 0000:01:00.0: [1af4:1041] type 00 class 0x020000 PCIe Endpoint Jan 20 23:51:49.454747 kernel: pci 0000:01:00.0: BAR 1 [mem 0x12400000-0x12400fff] Jan 20 23:51:49.454836 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref] Jan 20 23:51:49.454921 kernel: pci 0000:01:00.0: ROM [mem 0xfff80000-0xffffffff pref] Jan 20 23:51:49.455021 kernel: pci 0000:02:00.0: [1b36:000d] type 00 class 0x0c0330 PCIe Endpoint Jan 20 23:51:49.455107 kernel: pci 0000:02:00.0: BAR 0 [mem 0x12300000-0x12303fff 64bit] Jan 20 23:51:49.455200 kernel: pci 0000:03:00.0: [1af4:1042] type 00 class 0x010000 PCIe Endpoint Jan 20 23:51:49.455288 kernel: pci 0000:03:00.0: BAR 1 [mem 0x12200000-0x12200fff] Jan 20 23:51:49.455374 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000100000-0x8000103fff 64bit pref] Jan 20 23:51:49.455485 kernel: pci 0000:04:00.0: [1af4:1045] type 00 class 0x00ff00 PCIe Endpoint Jan 20 23:51:49.455581 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000200000-0x8000203fff 64bit pref] Jan 20 23:51:49.455681 kernel: pci 0000:05:00.0: [1af4:1044] type 00 class 0x00ff00 PCIe Endpoint Jan 20 23:51:49.455776 kernel: pci 0000:05:00.0: BAR 1 [mem 0x12100000-0x12100fff] Jan 20 23:51:49.455864 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000300000-0x8000303fff 64bit pref] Jan 20 23:51:49.455959 kernel: pci 0000:06:00.0: [1af4:1050] type 00 class 0x038000 PCIe Endpoint Jan 20 23:51:49.456045 kernel: pci 0000:06:00.0: BAR 1 [mem 0x12000000-0x12000fff] Jan 20 23:51:49.456133 kernel: pci 
0000:06:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref] Jan 20 23:51:49.456224 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x0fff] to [bus 01] add_size 1000 Jan 20 23:51:49.456314 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 01] add_size 100000 add_align 100000 Jan 20 23:51:49.456398 kernel: pci 0000:00:01.0: bridge window [mem 0x00100000-0x001fffff] to [bus 01] add_size 100000 add_align 100000 Jan 20 23:51:49.456497 kernel: pci 0000:00:01.1: bridge window [io 0x1000-0x0fff] to [bus 02] add_size 1000 Jan 20 23:51:49.456584 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 02] add_size 200000 add_align 100000 Jan 20 23:51:49.456670 kernel: pci 0000:00:01.1: bridge window [mem 0x00100000-0x001fffff] to [bus 02] add_size 100000 add_align 100000 Jan 20 23:51:49.456758 kernel: pci 0000:00:01.2: bridge window [io 0x1000-0x0fff] to [bus 03] add_size 1000 Jan 20 23:51:49.456863 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 03] add_size 100000 add_align 100000 Jan 20 23:51:49.456953 kernel: pci 0000:00:01.2: bridge window [mem 0x00100000-0x001fffff] to [bus 03] add_size 100000 add_align 100000 Jan 20 23:51:49.457043 kernel: pci 0000:00:01.3: bridge window [io 0x1000-0x0fff] to [bus 04] add_size 1000 Jan 20 23:51:49.457130 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 04] add_size 100000 add_align 100000 Jan 20 23:51:49.457213 kernel: pci 0000:00:01.3: bridge window [mem 0x00100000-0x000fffff] to [bus 04] add_size 200000 add_align 100000 Jan 20 23:51:49.457300 kernel: pci 0000:00:01.4: bridge window [io 0x1000-0x0fff] to [bus 05] add_size 1000 Jan 20 23:51:49.457383 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 05] add_size 100000 add_align 100000 Jan 20 23:51:49.457497 kernel: pci 0000:00:01.4: bridge window [mem 0x00100000-0x001fffff] to [bus 05] add_size 100000 add_align 100000 Jan 20 23:51:49.457598 kernel: pci 0000:00:01.5: bridge window [io 0x1000-0x0fff] to [bus 06] add_size 1000 Jan 20 23:51:49.457710 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff 64bit pref] to [bus 06] add_size 100000 add_align 100000 Jan 20 23:51:49.457802 kernel: pci 0000:00:01.5: bridge window [mem 0x00100000-0x001fffff] to [bus 06] add_size 100000 add_align 100000 Jan 20 23:51:49.457910 kernel: pci 0000:00:01.6: bridge window [io 0x1000-0x0fff] to [bus 07] add_size 1000 Jan 20 23:51:49.457996 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 07] add_size 200000 add_align 100000 Jan 20 23:51:49.458080 kernel: pci 0000:00:01.6: bridge window [mem 0x00100000-0x000fffff] to [bus 07] add_size 200000 add_align 100000 Jan 20 23:51:49.458167 kernel: pci 0000:00:01.7: bridge window [io 0x1000-0x0fff] to [bus 08] add_size 1000 Jan 20 23:51:49.458253 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 08] add_size 200000 add_align 100000 Jan 20 23:51:49.458334 kernel: pci 0000:00:01.7: bridge window [mem 0x00100000-0x000fffff] to [bus 08] add_size 200000 add_align 100000 Jan 20 23:51:49.458421 kernel: pci 0000:00:02.0: bridge window [io 0x1000-0x0fff] to [bus 09] add_size 1000 Jan 20 23:51:49.458526 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 09] add_size 200000 add_align 100000 Jan 20 23:51:49.458610 kernel: pci 0000:00:02.0: bridge window [mem 0x00100000-0x000fffff] to [bus 09] 
add_size 200000 add_align 100000 Jan 20 23:51:49.458702 kernel: pci 0000:00:02.1: bridge window [io 0x1000-0x0fff] to [bus 0a] add_size 1000 Jan 20 23:51:49.458797 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0a] add_size 200000 add_align 100000 Jan 20 23:51:49.458879 kernel: pci 0000:00:02.1: bridge window [mem 0x00100000-0x000fffff] to [bus 0a] add_size 200000 add_align 100000 Jan 20 23:51:49.458965 kernel: pci 0000:00:02.2: bridge window [io 0x1000-0x0fff] to [bus 0b] add_size 1000 Jan 20 23:51:49.459048 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0b] add_size 200000 add_align 100000 Jan 20 23:51:49.459138 kernel: pci 0000:00:02.2: bridge window [mem 0x00100000-0x000fffff] to [bus 0b] add_size 200000 add_align 100000 Jan 20 23:51:49.459232 kernel: pci 0000:00:02.3: bridge window [io 0x1000-0x0fff] to [bus 0c] add_size 1000 Jan 20 23:51:49.459314 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0c] add_size 200000 add_align 100000 Jan 20 23:51:49.459395 kernel: pci 0000:00:02.3: bridge window [mem 0x00100000-0x000fffff] to [bus 0c] add_size 200000 add_align 100000 Jan 20 23:51:49.459500 kernel: pci 0000:00:02.4: bridge window [io 0x1000-0x0fff] to [bus 0d] add_size 1000 Jan 20 23:51:49.459586 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0d] add_size 200000 add_align 100000 Jan 20 23:51:49.459667 kernel: pci 0000:00:02.4: bridge window [mem 0x00100000-0x000fffff] to [bus 0d] add_size 200000 add_align 100000 Jan 20 23:51:49.459777 kernel: pci 0000:00:02.5: bridge window [io 0x1000-0x0fff] to [bus 0e] add_size 1000 Jan 20 23:51:49.459860 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0e] add_size 200000 add_align 100000 Jan 20 23:51:49.459942 kernel: pci 0000:00:02.5: bridge window [mem 0x00100000-0x000fffff] to [bus 0e] add_size 200000 add_align 100000 Jan 20 23:51:49.460033 kernel: pci 0000:00:02.6: bridge window [io 0x1000-0x0fff] to [bus 0f] add_size 1000 Jan 20 23:51:49.460120 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 0f] add_size 200000 add_align 100000 Jan 20 23:51:49.460210 kernel: pci 0000:00:02.6: bridge window [mem 0x00100000-0x000fffff] to [bus 0f] add_size 200000 add_align 100000 Jan 20 23:51:49.460297 kernel: pci 0000:00:02.7: bridge window [io 0x1000-0x0fff] to [bus 10] add_size 1000 Jan 20 23:51:49.460381 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 10] add_size 200000 add_align 100000 Jan 20 23:51:49.460478 kernel: pci 0000:00:02.7: bridge window [mem 0x00100000-0x000fffff] to [bus 10] add_size 200000 add_align 100000 Jan 20 23:51:49.460568 kernel: pci 0000:00:03.0: bridge window [io 0x1000-0x0fff] to [bus 11] add_size 1000 Jan 20 23:51:49.460654 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 11] add_size 200000 add_align 100000 Jan 20 23:51:49.460740 kernel: pci 0000:00:03.0: bridge window [mem 0x00100000-0x000fffff] to [bus 11] add_size 200000 add_align 100000 Jan 20 23:51:49.460846 kernel: pci 0000:00:03.1: bridge window [io 0x1000-0x0fff] to [bus 12] add_size 1000 Jan 20 23:51:49.460938 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 12] add_size 200000 add_align 100000 Jan 20 23:51:49.461022 kernel: pci 0000:00:03.1: bridge window [mem 0x00100000-0x000fffff] to [bus 12] add_size 200000 add_align 100000 
Jan 20 23:51:49.461110 kernel: pci 0000:00:03.2: bridge window [io 0x1000-0x0fff] to [bus 13] add_size 1000 Jan 20 23:51:49.461197 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 13] add_size 200000 add_align 100000 Jan 20 23:51:49.461282 kernel: pci 0000:00:03.2: bridge window [mem 0x00100000-0x000fffff] to [bus 13] add_size 200000 add_align 100000 Jan 20 23:51:49.461368 kernel: pci 0000:00:03.3: bridge window [io 0x1000-0x0fff] to [bus 14] add_size 1000 Jan 20 23:51:49.461451 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 14] add_size 200000 add_align 100000 Jan 20 23:51:49.461559 kernel: pci 0000:00:03.3: bridge window [mem 0x00100000-0x000fffff] to [bus 14] add_size 200000 add_align 100000 Jan 20 23:51:49.461652 kernel: pci 0000:00:03.4: bridge window [io 0x1000-0x0fff] to [bus 15] add_size 1000 Jan 20 23:51:49.461740 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 15] add_size 200000 add_align 100000 Jan 20 23:51:49.461826 kernel: pci 0000:00:03.4: bridge window [mem 0x00100000-0x000fffff] to [bus 15] add_size 200000 add_align 100000 Jan 20 23:51:49.461935 kernel: pci 0000:00:03.5: bridge window [io 0x1000-0x0fff] to [bus 16] add_size 1000 Jan 20 23:51:49.462019 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 16] add_size 200000 add_align 100000 Jan 20 23:51:49.462104 kernel: pci 0000:00:03.5: bridge window [mem 0x00100000-0x000fffff] to [bus 16] add_size 200000 add_align 100000 Jan 20 23:51:49.462189 kernel: pci 0000:00:03.6: bridge window [io 0x1000-0x0fff] to [bus 17] add_size 1000 Jan 20 23:51:49.462274 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 17] add_size 200000 add_align 100000 Jan 20 23:51:49.462356 kernel: pci 0000:00:03.6: bridge window [mem 0x00100000-0x000fffff] to [bus 17] add_size 200000 add_align 100000 Jan 20 23:51:49.462446 kernel: pci 0000:00:03.7: bridge window [io 0x1000-0x0fff] to [bus 18] add_size 1000 Jan 20 23:51:49.462550 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 18] add_size 200000 add_align 100000 Jan 20 23:51:49.462634 kernel: pci 0000:00:03.7: bridge window [mem 0x00100000-0x000fffff] to [bus 18] add_size 200000 add_align 100000 Jan 20 23:51:49.462724 kernel: pci 0000:00:04.0: bridge window [io 0x1000-0x0fff] to [bus 19] add_size 1000 Jan 20 23:51:49.462807 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 19] add_size 200000 add_align 100000 Jan 20 23:51:49.462889 kernel: pci 0000:00:04.0: bridge window [mem 0x00100000-0x000fffff] to [bus 19] add_size 200000 add_align 100000 Jan 20 23:51:49.462986 kernel: pci 0000:00:04.1: bridge window [io 0x1000-0x0fff] to [bus 1a] add_size 1000 Jan 20 23:51:49.463073 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1a] add_size 200000 add_align 100000 Jan 20 23:51:49.463155 kernel: pci 0000:00:04.1: bridge window [mem 0x00100000-0x000fffff] to [bus 1a] add_size 200000 add_align 100000 Jan 20 23:51:49.463242 kernel: pci 0000:00:04.2: bridge window [io 0x1000-0x0fff] to [bus 1b] add_size 1000 Jan 20 23:51:49.463326 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1b] add_size 200000 add_align 100000 Jan 20 23:51:49.463407 kernel: pci 0000:00:04.2: bridge window [mem 0x00100000-0x000fffff] to [bus 1b] add_size 200000 add_align 100000 Jan 20 23:51:49.463518 kernel: pci 
0000:00:04.3: bridge window [io 0x1000-0x0fff] to [bus 1c] add_size 1000 Jan 20 23:51:49.463619 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1c] add_size 200000 add_align 100000 Jan 20 23:51:49.463707 kernel: pci 0000:00:04.3: bridge window [mem 0x00100000-0x000fffff] to [bus 1c] add_size 200000 add_align 100000 Jan 20 23:51:49.463799 kernel: pci 0000:00:04.4: bridge window [io 0x1000-0x0fff] to [bus 1d] add_size 1000 Jan 20 23:51:49.463883 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1d] add_size 200000 add_align 100000 Jan 20 23:51:49.463965 kernel: pci 0000:00:04.4: bridge window [mem 0x00100000-0x000fffff] to [bus 1d] add_size 200000 add_align 100000 Jan 20 23:51:49.464050 kernel: pci 0000:00:04.5: bridge window [io 0x1000-0x0fff] to [bus 1e] add_size 1000 Jan 20 23:51:49.464133 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1e] add_size 200000 add_align 100000 Jan 20 23:51:49.464217 kernel: pci 0000:00:04.5: bridge window [mem 0x00100000-0x000fffff] to [bus 1e] add_size 200000 add_align 100000 Jan 20 23:51:49.464305 kernel: pci 0000:00:04.6: bridge window [io 0x1000-0x0fff] to [bus 1f] add_size 1000 Jan 20 23:51:49.464387 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 1f] add_size 200000 add_align 100000 Jan 20 23:51:49.464483 kernel: pci 0000:00:04.6: bridge window [mem 0x00100000-0x000fffff] to [bus 1f] add_size 200000 add_align 100000 Jan 20 23:51:49.464575 kernel: pci 0000:00:04.7: bridge window [io 0x1000-0x0fff] to [bus 20] add_size 1000 Jan 20 23:51:49.464659 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 20] add_size 200000 add_align 100000 Jan 20 23:51:49.464743 kernel: pci 0000:00:04.7: bridge window [mem 0x00100000-0x000fffff] to [bus 20] add_size 200000 add_align 100000 Jan 20 23:51:49.464847 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x0fff] to [bus 21] add_size 1000 Jan 20 23:51:49.464937 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff 64bit pref] to [bus 21] add_size 200000 add_align 100000 Jan 20 23:51:49.465020 kernel: pci 0000:00:05.0: bridge window [mem 0x00100000-0x000fffff] to [bus 21] add_size 200000 add_align 100000 Jan 20 23:51:49.465105 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff]: assigned Jan 20 23:51:49.465191 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref]: assigned Jan 20 23:51:49.465276 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff]: assigned Jan 20 23:51:49.465357 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref]: assigned Jan 20 23:51:49.465442 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff]: assigned Jan 20 23:51:49.465540 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref]: assigned Jan 20 23:51:49.465628 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff]: assigned Jan 20 23:51:49.465710 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref]: assigned Jan 20 23:51:49.465797 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff]: assigned Jan 20 23:51:49.465880 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref]: assigned Jan 20 23:51:49.465962 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff]: assigned Jan 20 23:51:49.466043 kernel: pci 0000:00:01.5: bridge 
window [mem 0x8000a00000-0x8000bfffff 64bit pref]: assigned Jan 20 23:51:49.466128 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff]: assigned Jan 20 23:51:49.466208 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref]: assigned Jan 20 23:51:49.466295 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff]: assigned Jan 20 23:51:49.466377 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref]: assigned Jan 20 23:51:49.466472 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff]: assigned Jan 20 23:51:49.466557 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref]: assigned Jan 20 23:51:49.466645 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff]: assigned Jan 20 23:51:49.466727 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref]: assigned Jan 20 23:51:49.466812 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff]: assigned Jan 20 23:51:49.466895 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref]: assigned Jan 20 23:51:49.466979 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff]: assigned Jan 20 23:51:49.467060 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref]: assigned Jan 20 23:51:49.467143 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff]: assigned Jan 20 23:51:49.467225 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref]: assigned Jan 20 23:51:49.467308 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff]: assigned Jan 20 23:51:49.467397 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref]: assigned Jan 20 23:51:49.467492 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff]: assigned Jan 20 23:51:49.467577 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref]: assigned Jan 20 23:51:49.467664 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff]: assigned Jan 20 23:51:49.467747 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref]: assigned Jan 20 23:51:49.467833 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff]: assigned Jan 20 23:51:49.467920 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref]: assigned Jan 20 23:51:49.468004 kernel: pci 0000:00:03.1: bridge window [mem 0x12200000-0x123fffff]: assigned Jan 20 23:51:49.468086 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref]: assigned Jan 20 23:51:49.468171 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff]: assigned Jan 20 23:51:49.468255 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref]: assigned Jan 20 23:51:49.468338 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff]: assigned Jan 20 23:51:49.468421 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref]: assigned Jan 20 23:51:49.468525 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff]: assigned Jan 20 23:51:49.468610 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref]: assigned Jan 20 23:51:49.468695 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff]: assigned Jan 20 23:51:49.468777 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref]: assigned Jan 20 23:51:49.468879 
kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff]: assigned Jan 20 23:51:49.468964 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref]: assigned Jan 20 23:51:49.469052 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff]: assigned Jan 20 23:51:49.469135 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref]: assigned Jan 20 23:51:49.469219 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff]: assigned Jan 20 23:51:49.469301 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref]: assigned Jan 20 23:51:49.469386 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff]: assigned Jan 20 23:51:49.469482 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref]: assigned Jan 20 23:51:49.469573 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff]: assigned Jan 20 23:51:49.469656 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref]: assigned Jan 20 23:51:49.469742 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff]: assigned Jan 20 23:51:49.469823 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref]: assigned Jan 20 23:51:49.469907 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff]: assigned Jan 20 23:51:49.469989 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref]: assigned Jan 20 23:51:49.470075 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff]: assigned Jan 20 23:51:49.470157 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref]: assigned Jan 20 23:51:49.470240 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff]: assigned Jan 20 23:51:49.470322 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref]: assigned Jan 20 23:51:49.470406 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff]: assigned Jan 20 23:51:49.470501 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref]: assigned Jan 20 23:51:49.470587 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff]: assigned Jan 20 23:51:49.470672 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref]: assigned Jan 20 23:51:49.470756 kernel: pci 0000:00:01.0: BAR 0 [mem 0x14200000-0x14200fff]: assigned Jan 20 23:51:49.470838 kernel: pci 0000:00:01.0: bridge window [io 0x1000-0x1fff]: assigned Jan 20 23:51:49.470922 kernel: pci 0000:00:01.1: BAR 0 [mem 0x14201000-0x14201fff]: assigned Jan 20 23:51:49.471006 kernel: pci 0000:00:01.1: bridge window [io 0x2000-0x2fff]: assigned Jan 20 23:51:49.471089 kernel: pci 0000:00:01.2: BAR 0 [mem 0x14202000-0x14202fff]: assigned Jan 20 23:51:49.471173 kernel: pci 0000:00:01.2: bridge window [io 0x3000-0x3fff]: assigned Jan 20 23:51:49.471257 kernel: pci 0000:00:01.3: BAR 0 [mem 0x14203000-0x14203fff]: assigned Jan 20 23:51:49.471338 kernel: pci 0000:00:01.3: bridge window [io 0x4000-0x4fff]: assigned Jan 20 23:51:49.471421 kernel: pci 0000:00:01.4: BAR 0 [mem 0x14204000-0x14204fff]: assigned Jan 20 23:51:49.471517 kernel: pci 0000:00:01.4: bridge window [io 0x5000-0x5fff]: assigned Jan 20 23:51:49.471603 kernel: pci 0000:00:01.5: BAR 0 [mem 0x14205000-0x14205fff]: assigned Jan 20 23:51:49.471687 kernel: pci 0000:00:01.5: bridge window [io 0x6000-0x6fff]: assigned Jan 20 23:51:49.471774 kernel: pci 0000:00:01.6: BAR 0 [mem 
0x14206000-0x14206fff]: assigned Jan 20 23:51:49.471856 kernel: pci 0000:00:01.6: bridge window [io 0x7000-0x7fff]: assigned Jan 20 23:51:49.471940 kernel: pci 0000:00:01.7: BAR 0 [mem 0x14207000-0x14207fff]: assigned Jan 20 23:51:49.472022 kernel: pci 0000:00:01.7: bridge window [io 0x8000-0x8fff]: assigned Jan 20 23:51:49.472106 kernel: pci 0000:00:02.0: BAR 0 [mem 0x14208000-0x14208fff]: assigned Jan 20 23:51:49.472189 kernel: pci 0000:00:02.0: bridge window [io 0x9000-0x9fff]: assigned Jan 20 23:51:49.472275 kernel: pci 0000:00:02.1: BAR 0 [mem 0x14209000-0x14209fff]: assigned Jan 20 23:51:49.472357 kernel: pci 0000:00:02.1: bridge window [io 0xa000-0xafff]: assigned Jan 20 23:51:49.472442 kernel: pci 0000:00:02.2: BAR 0 [mem 0x1420a000-0x1420afff]: assigned Jan 20 23:51:49.472548 kernel: pci 0000:00:02.2: bridge window [io 0xb000-0xbfff]: assigned Jan 20 23:51:49.472641 kernel: pci 0000:00:02.3: BAR 0 [mem 0x1420b000-0x1420bfff]: assigned Jan 20 23:51:49.472723 kernel: pci 0000:00:02.3: bridge window [io 0xc000-0xcfff]: assigned Jan 20 23:51:49.472817 kernel: pci 0000:00:02.4: BAR 0 [mem 0x1420c000-0x1420cfff]: assigned Jan 20 23:51:49.472917 kernel: pci 0000:00:02.4: bridge window [io 0xd000-0xdfff]: assigned Jan 20 23:51:49.473005 kernel: pci 0000:00:02.5: BAR 0 [mem 0x1420d000-0x1420dfff]: assigned Jan 20 23:51:49.473089 kernel: pci 0000:00:02.5: bridge window [io 0xe000-0xefff]: assigned Jan 20 23:51:49.473173 kernel: pci 0000:00:02.6: BAR 0 [mem 0x1420e000-0x1420efff]: assigned Jan 20 23:51:49.473259 kernel: pci 0000:00:02.6: bridge window [io 0xf000-0xffff]: assigned Jan 20 23:51:49.473344 kernel: pci 0000:00:02.7: BAR 0 [mem 0x1420f000-0x1420ffff]: assigned Jan 20 23:51:49.473426 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.473529 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.473616 kernel: pci 0000:00:03.0: BAR 0 [mem 0x14210000-0x14210fff]: assigned Jan 20 23:51:49.473699 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.473784 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.473869 kernel: pci 0000:00:03.1: BAR 0 [mem 0x14211000-0x14211fff]: assigned Jan 20 23:51:49.473950 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.474031 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.474115 kernel: pci 0000:00:03.2: BAR 0 [mem 0x14212000-0x14212fff]: assigned Jan 20 23:51:49.474198 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.474282 kernel: pci 0000:00:03.2: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.474367 kernel: pci 0000:00:03.3: BAR 0 [mem 0x14213000-0x14213fff]: assigned Jan 20 23:51:49.474449 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.474564 kernel: pci 0000:00:03.3: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.474654 kernel: pci 0000:00:03.4: BAR 0 [mem 0x14214000-0x14214fff]: assigned Jan 20 23:51:49.474736 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.474817 kernel: pci 0000:00:03.4: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.474905 kernel: pci 0000:00:03.5: BAR 0 [mem 0x14215000-0x14215fff]: assigned Jan 20 23:51:49.474986 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: can't 
assign; no space Jan 20 23:51:49.475067 kernel: pci 0000:00:03.5: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.475150 kernel: pci 0000:00:03.6: BAR 0 [mem 0x14216000-0x14216fff]: assigned Jan 20 23:51:49.475232 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.475313 kernel: pci 0000:00:03.6: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.475399 kernel: pci 0000:00:03.7: BAR 0 [mem 0x14217000-0x14217fff]: assigned Jan 20 23:51:49.475503 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.475593 kernel: pci 0000:00:03.7: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.475679 kernel: pci 0000:00:04.0: BAR 0 [mem 0x14218000-0x14218fff]: assigned Jan 20 23:51:49.475763 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.475847 kernel: pci 0000:00:04.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.475933 kernel: pci 0000:00:04.1: BAR 0 [mem 0x14219000-0x14219fff]: assigned Jan 20 23:51:49.476017 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.476110 kernel: pci 0000:00:04.1: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.476199 kernel: pci 0000:00:04.2: BAR 0 [mem 0x1421a000-0x1421afff]: assigned Jan 20 23:51:49.476282 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.476364 kernel: pci 0000:00:04.2: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.476448 kernel: pci 0000:00:04.3: BAR 0 [mem 0x1421b000-0x1421bfff]: assigned Jan 20 23:51:49.476596 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.476683 kernel: pci 0000:00:04.3: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.476766 kernel: pci 0000:00:04.4: BAR 0 [mem 0x1421c000-0x1421cfff]: assigned Jan 20 23:51:49.476866 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.476952 kernel: pci 0000:00:04.4: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.477038 kernel: pci 0000:00:04.5: BAR 0 [mem 0x1421d000-0x1421dfff]: assigned Jan 20 23:51:49.477124 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.477207 kernel: pci 0000:00:04.5: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.477293 kernel: pci 0000:00:04.6: BAR 0 [mem 0x1421e000-0x1421efff]: assigned Jan 20 23:51:49.477375 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.477474 kernel: pci 0000:00:04.6: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.477570 kernel: pci 0000:00:04.7: BAR 0 [mem 0x1421f000-0x1421ffff]: assigned Jan 20 23:51:49.477655 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.477743 kernel: pci 0000:00:04.7: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.477828 kernel: pci 0000:00:05.0: BAR 0 [mem 0x14220000-0x14220fff]: assigned Jan 20 23:51:49.477914 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.477998 kernel: pci 0000:00:05.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.478083 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff]: assigned Jan 20 23:51:49.478167 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff]: assigned Jan 20 
23:51:49.478254 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff]: assigned Jan 20 23:51:49.478341 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff]: assigned Jan 20 23:51:49.478424 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff]: assigned Jan 20 23:51:49.478530 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff]: assigned Jan 20 23:51:49.478620 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff]: assigned Jan 20 23:51:49.478708 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff]: assigned Jan 20 23:51:49.478796 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff]: assigned Jan 20 23:51:49.478905 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff]: assigned Jan 20 23:51:49.478990 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff]: assigned Jan 20 23:51:49.479076 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff]: assigned Jan 20 23:51:49.479161 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff]: assigned Jan 20 23:51:49.479246 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff]: assigned Jan 20 23:51:49.479331 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff]: assigned Jan 20 23:51:49.479416 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.479511 kernel: pci 0000:00:03.1: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.479599 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.479700 kernel: pci 0000:00:03.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.479795 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.479878 kernel: pci 0000:00:02.7: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.479968 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480051 kernel: pci 0000:00:02.6: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.480136 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480223 kernel: pci 0000:00:02.5: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.480308 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480393 kernel: pci 0000:00:02.4: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.480491 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480579 kernel: pci 0000:00:02.3: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.480665 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480753 kernel: pci 0000:00:02.2: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.480856 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.480946 kernel: pci 0000:00:02.1: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481033 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.481118 kernel: pci 0000:00:02.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481204 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.481293 kernel: pci 0000:00:01.7: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481380 kernel: pci 0000:00:01.6: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.481480 kernel: pci 
0000:00:01.6: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481573 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.481658 kernel: pci 0000:00:01.5: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481744 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.481836 kernel: pci 0000:00:01.4: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.481926 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.482014 kernel: pci 0000:00:01.3: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.482125 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.482212 kernel: pci 0000:00:01.2: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.482304 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.482396 kernel: pci 0000:00:01.1: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.482501 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: can't assign; no space Jan 20 23:51:49.482591 kernel: pci 0000:00:01.0: bridge window [io size 0x1000]: failed to assign Jan 20 23:51:49.482683 kernel: pci 0000:01:00.0: ROM [mem 0x10000000-0x1007ffff pref]: assigned Jan 20 23:51:49.482769 kernel: pci 0000:01:00.0: BAR 4 [mem 0x8000000000-0x8000003fff 64bit pref]: assigned Jan 20 23:51:49.482857 kernel: pci 0000:01:00.0: BAR 1 [mem 0x10080000-0x10080fff]: assigned Jan 20 23:51:49.482940 kernel: pci 0000:00:01.0: PCI bridge to [bus 01] Jan 20 23:51:49.483023 kernel: pci 0000:00:01.0: bridge window [mem 0x10000000-0x101fffff] Jan 20 23:51:49.483108 kernel: pci 0000:00:01.0: bridge window [mem 0x8000000000-0x80001fffff 64bit pref] Jan 20 23:51:49.483208 kernel: pci 0000:02:00.0: BAR 0 [mem 0x10200000-0x10203fff 64bit]: assigned Jan 20 23:51:49.483295 kernel: pci 0000:00:01.1: PCI bridge to [bus 02] Jan 20 23:51:49.483382 kernel: pci 0000:00:01.1: bridge window [mem 0x10200000-0x103fffff] Jan 20 23:51:49.483477 kernel: pci 0000:00:01.1: bridge window [mem 0x8000200000-0x80003fffff 64bit pref] Jan 20 23:51:49.483571 kernel: pci 0000:03:00.0: BAR 4 [mem 0x8000400000-0x8000403fff 64bit pref]: assigned Jan 20 23:51:49.483656 kernel: pci 0000:03:00.0: BAR 1 [mem 0x10400000-0x10400fff]: assigned Jan 20 23:51:49.483740 kernel: pci 0000:00:01.2: PCI bridge to [bus 03] Jan 20 23:51:49.483840 kernel: pci 0000:00:01.2: bridge window [mem 0x10400000-0x105fffff] Jan 20 23:51:49.483926 kernel: pci 0000:00:01.2: bridge window [mem 0x8000400000-0x80005fffff 64bit pref] Jan 20 23:51:49.484016 kernel: pci 0000:04:00.0: BAR 4 [mem 0x8000600000-0x8000603fff 64bit pref]: assigned Jan 20 23:51:49.484101 kernel: pci 0000:00:01.3: PCI bridge to [bus 04] Jan 20 23:51:49.484185 kernel: pci 0000:00:01.3: bridge window [mem 0x10600000-0x107fffff] Jan 20 23:51:49.484269 kernel: pci 0000:00:01.3: bridge window [mem 0x8000600000-0x80007fffff 64bit pref] Jan 20 23:51:49.484360 kernel: pci 0000:05:00.0: BAR 4 [mem 0x8000800000-0x8000803fff 64bit pref]: assigned Jan 20 23:51:49.484448 kernel: pci 0000:05:00.0: BAR 1 [mem 0x10800000-0x10800fff]: assigned Jan 20 23:51:49.484558 kernel: pci 0000:00:01.4: PCI bridge to [bus 05] Jan 20 23:51:49.484643 kernel: pci 0000:00:01.4: bridge window [mem 0x10800000-0x109fffff] Jan 20 23:51:49.484727 kernel: pci 0000:00:01.4: bridge window [mem 0x8000800000-0x80009fffff 64bit pref] Jan 20 23:51:49.484835 
kernel: pci 0000:06:00.0: BAR 4 [mem 0x8000a00000-0x8000a03fff 64bit pref]: assigned Jan 20 23:51:49.484930 kernel: pci 0000:06:00.0: BAR 1 [mem 0x10a00000-0x10a00fff]: assigned Jan 20 23:51:49.485017 kernel: pci 0000:00:01.5: PCI bridge to [bus 06] Jan 20 23:51:49.485101 kernel: pci 0000:00:01.5: bridge window [mem 0x10a00000-0x10bfffff] Jan 20 23:51:49.485183 kernel: pci 0000:00:01.5: bridge window [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 20 23:51:49.485265 kernel: pci 0000:00:01.6: PCI bridge to [bus 07] Jan 20 23:51:49.485348 kernel: pci 0000:00:01.6: bridge window [mem 0x10c00000-0x10dfffff] Jan 20 23:51:49.485429 kernel: pci 0000:00:01.6: bridge window [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 20 23:51:49.485555 kernel: pci 0000:00:01.7: PCI bridge to [bus 08] Jan 20 23:51:49.485645 kernel: pci 0000:00:01.7: bridge window [mem 0x10e00000-0x10ffffff] Jan 20 23:51:49.485728 kernel: pci 0000:00:01.7: bridge window [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 20 23:51:49.485816 kernel: pci 0000:00:02.0: PCI bridge to [bus 09] Jan 20 23:51:49.485907 kernel: pci 0000:00:02.0: bridge window [mem 0x11000000-0x111fffff] Jan 20 23:51:49.485994 kernel: pci 0000:00:02.0: bridge window [mem 0x8001000000-0x80011fffff 64bit pref] Jan 20 23:51:49.486082 kernel: pci 0000:00:02.1: PCI bridge to [bus 0a] Jan 20 23:51:49.486164 kernel: pci 0000:00:02.1: bridge window [mem 0x11200000-0x113fffff] Jan 20 23:51:49.486248 kernel: pci 0000:00:02.1: bridge window [mem 0x8001200000-0x80013fffff 64bit pref] Jan 20 23:51:49.486333 kernel: pci 0000:00:02.2: PCI bridge to [bus 0b] Jan 20 23:51:49.486417 kernel: pci 0000:00:02.2: bridge window [mem 0x11400000-0x115fffff] Jan 20 23:51:49.486521 kernel: pci 0000:00:02.2: bridge window [mem 0x8001400000-0x80015fffff 64bit pref] Jan 20 23:51:49.486611 kernel: pci 0000:00:02.3: PCI bridge to [bus 0c] Jan 20 23:51:49.486697 kernel: pci 0000:00:02.3: bridge window [mem 0x11600000-0x117fffff] Jan 20 23:51:49.486782 kernel: pci 0000:00:02.3: bridge window [mem 0x8001600000-0x80017fffff 64bit pref] Jan 20 23:51:49.486866 kernel: pci 0000:00:02.4: PCI bridge to [bus 0d] Jan 20 23:51:49.486953 kernel: pci 0000:00:02.4: bridge window [mem 0x11800000-0x119fffff] Jan 20 23:51:49.487035 kernel: pci 0000:00:02.4: bridge window [mem 0x8001800000-0x80019fffff 64bit pref] Jan 20 23:51:49.487120 kernel: pci 0000:00:02.5: PCI bridge to [bus 0e] Jan 20 23:51:49.487204 kernel: pci 0000:00:02.5: bridge window [mem 0x11a00000-0x11bfffff] Jan 20 23:51:49.487288 kernel: pci 0000:00:02.5: bridge window [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 20 23:51:49.487373 kernel: pci 0000:00:02.6: PCI bridge to [bus 0f] Jan 20 23:51:49.487477 kernel: pci 0000:00:02.6: bridge window [mem 0x11c00000-0x11dfffff] Jan 20 23:51:49.487570 kernel: pci 0000:00:02.6: bridge window [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 20 23:51:49.487657 kernel: pci 0000:00:02.7: PCI bridge to [bus 10] Jan 20 23:51:49.487742 kernel: pci 0000:00:02.7: bridge window [mem 0x11e00000-0x11ffffff] Jan 20 23:51:49.487825 kernel: pci 0000:00:02.7: bridge window [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 20 23:51:49.487915 kernel: pci 0000:00:03.0: PCI bridge to [bus 11] Jan 20 23:51:49.487999 kernel: pci 0000:00:03.0: bridge window [mem 0x12000000-0x121fffff] Jan 20 23:51:49.488083 kernel: pci 0000:00:03.0: bridge window [mem 0x8002000000-0x80021fffff 64bit pref] Jan 20 23:51:49.488169 kernel: pci 0000:00:03.1: PCI bridge to [bus 12] Jan 20 23:51:49.488253 kernel: pci 0000:00:03.1: bridge window [mem 
0x12200000-0x123fffff] Jan 20 23:51:49.488335 kernel: pci 0000:00:03.1: bridge window [mem 0x8002200000-0x80023fffff 64bit pref] Jan 20 23:51:49.488424 kernel: pci 0000:00:03.2: PCI bridge to [bus 13] Jan 20 23:51:49.488526 kernel: pci 0000:00:03.2: bridge window [io 0xf000-0xffff] Jan 20 23:51:49.488611 kernel: pci 0000:00:03.2: bridge window [mem 0x12400000-0x125fffff] Jan 20 23:51:49.488693 kernel: pci 0000:00:03.2: bridge window [mem 0x8002400000-0x80025fffff 64bit pref] Jan 20 23:51:49.488782 kernel: pci 0000:00:03.3: PCI bridge to [bus 14] Jan 20 23:51:49.488891 kernel: pci 0000:00:03.3: bridge window [io 0xe000-0xefff] Jan 20 23:51:49.488983 kernel: pci 0000:00:03.3: bridge window [mem 0x12600000-0x127fffff] Jan 20 23:51:49.489066 kernel: pci 0000:00:03.3: bridge window [mem 0x8002600000-0x80027fffff 64bit pref] Jan 20 23:51:49.489152 kernel: pci 0000:00:03.4: PCI bridge to [bus 15] Jan 20 23:51:49.489237 kernel: pci 0000:00:03.4: bridge window [io 0xd000-0xdfff] Jan 20 23:51:49.489322 kernel: pci 0000:00:03.4: bridge window [mem 0x12800000-0x129fffff] Jan 20 23:51:49.489405 kernel: pci 0000:00:03.4: bridge window [mem 0x8002800000-0x80029fffff 64bit pref] Jan 20 23:51:49.489509 kernel: pci 0000:00:03.5: PCI bridge to [bus 16] Jan 20 23:51:49.489604 kernel: pci 0000:00:03.5: bridge window [io 0xc000-0xcfff] Jan 20 23:51:49.489688 kernel: pci 0000:00:03.5: bridge window [mem 0x12a00000-0x12bfffff] Jan 20 23:51:49.489772 kernel: pci 0000:00:03.5: bridge window [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 20 23:51:49.489859 kernel: pci 0000:00:03.6: PCI bridge to [bus 17] Jan 20 23:51:49.489944 kernel: pci 0000:00:03.6: bridge window [io 0xb000-0xbfff] Jan 20 23:51:49.490028 kernel: pci 0000:00:03.6: bridge window [mem 0x12c00000-0x12dfffff] Jan 20 23:51:49.490111 kernel: pci 0000:00:03.6: bridge window [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 20 23:51:49.490199 kernel: pci 0000:00:03.7: PCI bridge to [bus 18] Jan 20 23:51:49.490283 kernel: pci 0000:00:03.7: bridge window [io 0xa000-0xafff] Jan 20 23:51:49.490365 kernel: pci 0000:00:03.7: bridge window [mem 0x12e00000-0x12ffffff] Jan 20 23:51:49.490448 kernel: pci 0000:00:03.7: bridge window [mem 0x8002e00000-0x8002ffffff 64bit pref] Jan 20 23:51:49.490552 kernel: pci 0000:00:04.0: PCI bridge to [bus 19] Jan 20 23:51:49.490637 kernel: pci 0000:00:04.0: bridge window [io 0x9000-0x9fff] Jan 20 23:51:49.490722 kernel: pci 0000:00:04.0: bridge window [mem 0x13000000-0x131fffff] Jan 20 23:51:49.490804 kernel: pci 0000:00:04.0: bridge window [mem 0x8003000000-0x80031fffff 64bit pref] Jan 20 23:51:49.490889 kernel: pci 0000:00:04.1: PCI bridge to [bus 1a] Jan 20 23:51:49.490972 kernel: pci 0000:00:04.1: bridge window [io 0x8000-0x8fff] Jan 20 23:51:49.491053 kernel: pci 0000:00:04.1: bridge window [mem 0x13200000-0x133fffff] Jan 20 23:51:49.491134 kernel: pci 0000:00:04.1: bridge window [mem 0x8003200000-0x80033fffff 64bit pref] Jan 20 23:51:49.491219 kernel: pci 0000:00:04.2: PCI bridge to [bus 1b] Jan 20 23:51:49.491304 kernel: pci 0000:00:04.2: bridge window [io 0x7000-0x7fff] Jan 20 23:51:49.491387 kernel: pci 0000:00:04.2: bridge window [mem 0x13400000-0x135fffff] Jan 20 23:51:49.491484 kernel: pci 0000:00:04.2: bridge window [mem 0x8003400000-0x80035fffff 64bit pref] Jan 20 23:51:49.491572 kernel: pci 0000:00:04.3: PCI bridge to [bus 1c] Jan 20 23:51:49.491655 kernel: pci 0000:00:04.3: bridge window [io 0x6000-0x6fff] Jan 20 23:51:49.491736 kernel: pci 0000:00:04.3: bridge window [mem 0x13600000-0x137fffff] Jan 20 
23:51:49.491817 kernel: pci 0000:00:04.3: bridge window [mem 0x8003600000-0x80037fffff 64bit pref] Jan 20 23:51:49.491905 kernel: pci 0000:00:04.4: PCI bridge to [bus 1d] Jan 20 23:51:49.491987 kernel: pci 0000:00:04.4: bridge window [io 0x5000-0x5fff] Jan 20 23:51:49.492069 kernel: pci 0000:00:04.4: bridge window [mem 0x13800000-0x139fffff] Jan 20 23:51:49.492150 kernel: pci 0000:00:04.4: bridge window [mem 0x8003800000-0x80039fffff 64bit pref] Jan 20 23:51:49.492235 kernel: pci 0000:00:04.5: PCI bridge to [bus 1e] Jan 20 23:51:49.492317 kernel: pci 0000:00:04.5: bridge window [io 0x4000-0x4fff] Jan 20 23:51:49.492401 kernel: pci 0000:00:04.5: bridge window [mem 0x13a00000-0x13bfffff] Jan 20 23:51:49.492494 kernel: pci 0000:00:04.5: bridge window [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 20 23:51:49.492582 kernel: pci 0000:00:04.6: PCI bridge to [bus 1f] Jan 20 23:51:49.492666 kernel: pci 0000:00:04.6: bridge window [io 0x3000-0x3fff] Jan 20 23:51:49.492748 kernel: pci 0000:00:04.6: bridge window [mem 0x13c00000-0x13dfffff] Jan 20 23:51:49.492844 kernel: pci 0000:00:04.6: bridge window [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 20 23:51:49.492935 kernel: pci 0000:00:04.7: PCI bridge to [bus 20] Jan 20 23:51:49.493024 kernel: pci 0000:00:04.7: bridge window [io 0x2000-0x2fff] Jan 20 23:51:49.493107 kernel: pci 0000:00:04.7: bridge window [mem 0x13e00000-0x13ffffff] Jan 20 23:51:49.493192 kernel: pci 0000:00:04.7: bridge window [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 20 23:51:49.493277 kernel: pci 0000:00:05.0: PCI bridge to [bus 21] Jan 20 23:51:49.493359 kernel: pci 0000:00:05.0: bridge window [io 0x1000-0x1fff] Jan 20 23:51:49.493441 kernel: pci 0000:00:05.0: bridge window [mem 0x14000000-0x141fffff] Jan 20 23:51:49.493542 kernel: pci 0000:00:05.0: bridge window [mem 0x8004000000-0x80041fffff 64bit pref] Jan 20 23:51:49.493632 kernel: pci_bus 0000:00: resource 4 [mem 0x10000000-0x3efeffff window] Jan 20 23:51:49.493708 kernel: pci_bus 0000:00: resource 5 [io 0x0000-0xffff window] Jan 20 23:51:49.493782 kernel: pci_bus 0000:00: resource 6 [mem 0x8000000000-0xffffffffff window] Jan 20 23:51:49.493870 kernel: pci_bus 0000:01: resource 1 [mem 0x10000000-0x101fffff] Jan 20 23:51:49.493948 kernel: pci_bus 0000:01: resource 2 [mem 0x8000000000-0x80001fffff 64bit pref] Jan 20 23:51:49.494036 kernel: pci_bus 0000:02: resource 1 [mem 0x10200000-0x103fffff] Jan 20 23:51:49.494113 kernel: pci_bus 0000:02: resource 2 [mem 0x8000200000-0x80003fffff 64bit pref] Jan 20 23:51:49.494197 kernel: pci_bus 0000:03: resource 1 [mem 0x10400000-0x105fffff] Jan 20 23:51:49.494273 kernel: pci_bus 0000:03: resource 2 [mem 0x8000400000-0x80005fffff 64bit pref] Jan 20 23:51:49.494360 kernel: pci_bus 0000:04: resource 1 [mem 0x10600000-0x107fffff] Jan 20 23:51:49.494436 kernel: pci_bus 0000:04: resource 2 [mem 0x8000600000-0x80007fffff 64bit pref] Jan 20 23:51:49.494552 kernel: pci_bus 0000:05: resource 1 [mem 0x10800000-0x109fffff] Jan 20 23:51:49.494634 kernel: pci_bus 0000:05: resource 2 [mem 0x8000800000-0x80009fffff 64bit pref] Jan 20 23:51:49.494718 kernel: pci_bus 0000:06: resource 1 [mem 0x10a00000-0x10bfffff] Jan 20 23:51:49.494795 kernel: pci_bus 0000:06: resource 2 [mem 0x8000a00000-0x8000bfffff 64bit pref] Jan 20 23:51:49.494880 kernel: pci_bus 0000:07: resource 1 [mem 0x10c00000-0x10dfffff] Jan 20 23:51:49.494961 kernel: pci_bus 0000:07: resource 2 [mem 0x8000c00000-0x8000dfffff 64bit pref] Jan 20 23:51:49.495044 kernel: pci_bus 0000:08: resource 1 [mem 0x10e00000-0x10ffffff] Jan 20 
23:51:49.495121 kernel: pci_bus 0000:08: resource 2 [mem 0x8000e00000-0x8000ffffff 64bit pref] Jan 20 23:51:49.495205 kernel: pci_bus 0000:09: resource 1 [mem 0x11000000-0x111fffff] Jan 20 23:51:49.495282 kernel: pci_bus 0000:09: resource 2 [mem 0x8001000000-0x80011fffff 64bit pref] Jan 20 23:51:49.495368 kernel: pci_bus 0000:0a: resource 1 [mem 0x11200000-0x113fffff] Jan 20 23:51:49.495445 kernel: pci_bus 0000:0a: resource 2 [mem 0x8001200000-0x80013fffff 64bit pref] Jan 20 23:51:49.495592 kernel: pci_bus 0000:0b: resource 1 [mem 0x11400000-0x115fffff] Jan 20 23:51:49.495679 kernel: pci_bus 0000:0b: resource 2 [mem 0x8001400000-0x80015fffff 64bit pref] Jan 20 23:51:49.495764 kernel: pci_bus 0000:0c: resource 1 [mem 0x11600000-0x117fffff] Jan 20 23:51:49.495845 kernel: pci_bus 0000:0c: resource 2 [mem 0x8001600000-0x80017fffff 64bit pref] Jan 20 23:51:49.495928 kernel: pci_bus 0000:0d: resource 1 [mem 0x11800000-0x119fffff] Jan 20 23:51:49.496005 kernel: pci_bus 0000:0d: resource 2 [mem 0x8001800000-0x80019fffff 64bit pref] Jan 20 23:51:49.496093 kernel: pci_bus 0000:0e: resource 1 [mem 0x11a00000-0x11bfffff] Jan 20 23:51:49.496170 kernel: pci_bus 0000:0e: resource 2 [mem 0x8001a00000-0x8001bfffff 64bit pref] Jan 20 23:51:49.496256 kernel: pci_bus 0000:0f: resource 1 [mem 0x11c00000-0x11dfffff] Jan 20 23:51:49.496332 kernel: pci_bus 0000:0f: resource 2 [mem 0x8001c00000-0x8001dfffff 64bit pref] Jan 20 23:51:49.496415 kernel: pci_bus 0000:10: resource 1 [mem 0x11e00000-0x11ffffff] Jan 20 23:51:49.496512 kernel: pci_bus 0000:10: resource 2 [mem 0x8001e00000-0x8001ffffff 64bit pref] Jan 20 23:51:49.496598 kernel: pci_bus 0000:11: resource 1 [mem 0x12000000-0x121fffff] Jan 20 23:51:49.496675 kernel: pci_bus 0000:11: resource 2 [mem 0x8002000000-0x80021fffff 64bit pref] Jan 20 23:51:49.496761 kernel: pci_bus 0000:12: resource 1 [mem 0x12200000-0x123fffff] Jan 20 23:51:49.496859 kernel: pci_bus 0000:12: resource 2 [mem 0x8002200000-0x80023fffff 64bit pref] Jan 20 23:51:49.496949 kernel: pci_bus 0000:13: resource 0 [io 0xf000-0xffff] Jan 20 23:51:49.497025 kernel: pci_bus 0000:13: resource 1 [mem 0x12400000-0x125fffff] Jan 20 23:51:49.497104 kernel: pci_bus 0000:13: resource 2 [mem 0x8002400000-0x80025fffff 64bit pref] Jan 20 23:51:49.497189 kernel: pci_bus 0000:14: resource 0 [io 0xe000-0xefff] Jan 20 23:51:49.497266 kernel: pci_bus 0000:14: resource 1 [mem 0x12600000-0x127fffff] Jan 20 23:51:49.497341 kernel: pci_bus 0000:14: resource 2 [mem 0x8002600000-0x80027fffff 64bit pref] Jan 20 23:51:49.497423 kernel: pci_bus 0000:15: resource 0 [io 0xd000-0xdfff] Jan 20 23:51:49.497526 kernel: pci_bus 0000:15: resource 1 [mem 0x12800000-0x129fffff] Jan 20 23:51:49.497609 kernel: pci_bus 0000:15: resource 2 [mem 0x8002800000-0x80029fffff 64bit pref] Jan 20 23:51:49.497698 kernel: pci_bus 0000:16: resource 0 [io 0xc000-0xcfff] Jan 20 23:51:49.497774 kernel: pci_bus 0000:16: resource 1 [mem 0x12a00000-0x12bfffff] Jan 20 23:51:49.497853 kernel: pci_bus 0000:16: resource 2 [mem 0x8002a00000-0x8002bfffff 64bit pref] Jan 20 23:51:49.497939 kernel: pci_bus 0000:17: resource 0 [io 0xb000-0xbfff] Jan 20 23:51:49.498019 kernel: pci_bus 0000:17: resource 1 [mem 0x12c00000-0x12dfffff] Jan 20 23:51:49.498103 kernel: pci_bus 0000:17: resource 2 [mem 0x8002c00000-0x8002dfffff 64bit pref] Jan 20 23:51:49.498189 kernel: pci_bus 0000:18: resource 0 [io 0xa000-0xafff] Jan 20 23:51:49.498265 kernel: pci_bus 0000:18: resource 1 [mem 0x12e00000-0x12ffffff] Jan 20 23:51:49.498341 kernel: pci_bus 0000:18: resource 2 [mem 
0x8002e00000-0x8002ffffff 64bit pref] Jan 20 23:51:49.498424 kernel: pci_bus 0000:19: resource 0 [io 0x9000-0x9fff] Jan 20 23:51:49.498522 kernel: pci_bus 0000:19: resource 1 [mem 0x13000000-0x131fffff] Jan 20 23:51:49.498600 kernel: pci_bus 0000:19: resource 2 [mem 0x8003000000-0x80031fffff 64bit pref] Jan 20 23:51:49.498697 kernel: pci_bus 0000:1a: resource 0 [io 0x8000-0x8fff] Jan 20 23:51:49.498779 kernel: pci_bus 0000:1a: resource 1 [mem 0x13200000-0x133fffff] Jan 20 23:51:49.498856 kernel: pci_bus 0000:1a: resource 2 [mem 0x8003200000-0x80033fffff 64bit pref] Jan 20 23:51:49.498939 kernel: pci_bus 0000:1b: resource 0 [io 0x7000-0x7fff] Jan 20 23:51:49.499018 kernel: pci_bus 0000:1b: resource 1 [mem 0x13400000-0x135fffff] Jan 20 23:51:49.499093 kernel: pci_bus 0000:1b: resource 2 [mem 0x8003400000-0x80035fffff 64bit pref] Jan 20 23:51:49.499177 kernel: pci_bus 0000:1c: resource 0 [io 0x6000-0x6fff] Jan 20 23:51:49.499253 kernel: pci_bus 0000:1c: resource 1 [mem 0x13600000-0x137fffff] Jan 20 23:51:49.499330 kernel: pci_bus 0000:1c: resource 2 [mem 0x8003600000-0x80037fffff 64bit pref] Jan 20 23:51:49.499416 kernel: pci_bus 0000:1d: resource 0 [io 0x5000-0x5fff] Jan 20 23:51:49.499514 kernel: pci_bus 0000:1d: resource 1 [mem 0x13800000-0x139fffff] Jan 20 23:51:49.499596 kernel: pci_bus 0000:1d: resource 2 [mem 0x8003800000-0x80039fffff 64bit pref] Jan 20 23:51:49.499683 kernel: pci_bus 0000:1e: resource 0 [io 0x4000-0x4fff] Jan 20 23:51:49.499793 kernel: pci_bus 0000:1e: resource 1 [mem 0x13a00000-0x13bfffff] Jan 20 23:51:49.499874 kernel: pci_bus 0000:1e: resource 2 [mem 0x8003a00000-0x8003bfffff 64bit pref] Jan 20 23:51:49.499970 kernel: pci_bus 0000:1f: resource 0 [io 0x3000-0x3fff] Jan 20 23:51:49.500047 kernel: pci_bus 0000:1f: resource 1 [mem 0x13c00000-0x13dfffff] Jan 20 23:51:49.500124 kernel: pci_bus 0000:1f: resource 2 [mem 0x8003c00000-0x8003dfffff 64bit pref] Jan 20 23:51:49.500209 kernel: pci_bus 0000:20: resource 0 [io 0x2000-0x2fff] Jan 20 23:51:49.500287 kernel: pci_bus 0000:20: resource 1 [mem 0x13e00000-0x13ffffff] Jan 20 23:51:49.500364 kernel: pci_bus 0000:20: resource 2 [mem 0x8003e00000-0x8003ffffff 64bit pref] Jan 20 23:51:49.500450 kernel: pci_bus 0000:21: resource 0 [io 0x1000-0x1fff] Jan 20 23:51:49.500553 kernel: pci_bus 0000:21: resource 1 [mem 0x14000000-0x141fffff] Jan 20 23:51:49.500631 kernel: pci_bus 0000:21: resource 2 [mem 0x8004000000-0x80041fffff 64bit pref] Jan 20 23:51:49.500641 kernel: ACPI: PCI: Interrupt link GSI0 configured for IRQ 35 Jan 20 23:51:49.500649 kernel: ACPI: PCI: Interrupt link GSI1 configured for IRQ 36 Jan 20 23:51:49.500658 kernel: ACPI: PCI: Interrupt link GSI2 configured for IRQ 37 Jan 20 23:51:49.500669 kernel: ACPI: PCI: Interrupt link GSI3 configured for IRQ 38 Jan 20 23:51:49.500677 kernel: iommu: Default domain type: Translated Jan 20 23:51:49.500686 kernel: iommu: DMA domain TLB invalidation policy: strict mode Jan 20 23:51:49.500694 kernel: efivars: Registered efivars operations Jan 20 23:51:49.500702 kernel: vgaarb: loaded Jan 20 23:51:49.500710 kernel: clocksource: Switched to clocksource arch_sys_counter Jan 20 23:51:49.500718 kernel: VFS: Disk quotas dquot_6.6.0 Jan 20 23:51:49.500728 kernel: VFS: Dquot-cache hash table entries: 512 (order 0, 4096 bytes) Jan 20 23:51:49.500736 kernel: pnp: PnP ACPI init Jan 20 23:51:49.500851 kernel: system 00:00: [mem 0x4010000000-0x401fffffff window] could not be reserved Jan 20 23:51:49.500865 kernel: pnp: PnP ACPI: found 1 devices Jan 20 23:51:49.500874 kernel: NET: Registered 
PF_INET protocol family Jan 20 23:51:49.500883 kernel: IP idents hash table entries: 262144 (order: 9, 2097152 bytes, linear) Jan 20 23:51:49.500891 kernel: tcp_listen_portaddr_hash hash table entries: 8192 (order: 5, 131072 bytes, linear) Jan 20 23:51:49.500902 kernel: Table-perturb hash table entries: 65536 (order: 6, 262144 bytes, linear) Jan 20 23:51:49.500911 kernel: TCP established hash table entries: 131072 (order: 8, 1048576 bytes, linear) Jan 20 23:51:49.500920 kernel: TCP bind hash table entries: 65536 (order: 9, 2097152 bytes, linear) Jan 20 23:51:49.500928 kernel: TCP: Hash tables configured (established 131072 bind 65536) Jan 20 23:51:49.500937 kernel: UDP hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 20 23:51:49.500945 kernel: UDP-Lite hash table entries: 8192 (order: 6, 262144 bytes, linear) Jan 20 23:51:49.500953 kernel: NET: Registered PF_UNIX/PF_LOCAL protocol family Jan 20 23:51:49.501056 kernel: pci 0000:02:00.0: enabling device (0000 -> 0002) Jan 20 23:51:49.501069 kernel: PCI: CLS 0 bytes, default 64 Jan 20 23:51:49.501077 kernel: kvm [1]: HYP mode not available Jan 20 23:51:49.501086 kernel: Initialise system trusted keyrings Jan 20 23:51:49.501094 kernel: workingset: timestamp_bits=39 max_order=22 bucket_order=0 Jan 20 23:51:49.501102 kernel: Key type asymmetric registered Jan 20 23:51:49.501110 kernel: Asymmetric key parser 'x509' registered Jan 20 23:51:49.501121 kernel: Block layer SCSI generic (bsg) driver version 0.4 loaded (major 249) Jan 20 23:51:49.501129 kernel: io scheduler mq-deadline registered Jan 20 23:51:49.501137 kernel: io scheduler kyber registered Jan 20 23:51:49.501161 kernel: io scheduler bfq registered Jan 20 23:51:49.501170 kernel: ACPI: \_SB_.PCI0.GSI1: Enabled at IRQ 36 Jan 20 23:51:49.501259 kernel: pcieport 0000:00:01.0: PME: Signaling with IRQ 50 Jan 20 23:51:49.501347 kernel: pcieport 0000:00:01.0: AER: enabled with IRQ 50 Jan 20 23:51:49.501431 kernel: pcieport 0000:00:01.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.501557 kernel: pcieport 0000:00:01.1: PME: Signaling with IRQ 51 Jan 20 23:51:49.501660 kernel: pcieport 0000:00:01.1: AER: enabled with IRQ 51 Jan 20 23:51:49.501747 kernel: pcieport 0000:00:01.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.501833 kernel: pcieport 0000:00:01.2: PME: Signaling with IRQ 52 Jan 20 23:51:49.501916 kernel: pcieport 0000:00:01.2: AER: enabled with IRQ 52 Jan 20 23:51:49.502001 kernel: pcieport 0000:00:01.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.502089 kernel: pcieport 0000:00:01.3: PME: Signaling with IRQ 53 Jan 20 23:51:49.502172 kernel: pcieport 0000:00:01.3: AER: enabled with IRQ 53 Jan 20 23:51:49.502261 kernel: pcieport 0000:00:01.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.502346 kernel: pcieport 0000:00:01.4: PME: Signaling with IRQ 54 Jan 20 23:51:49.502429 kernel: pcieport 0000:00:01.4: AER: enabled with IRQ 54 Jan 20 23:51:49.502528 kernel: pcieport 0000:00:01.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.502636 kernel: pcieport 0000:00:01.5: PME: Signaling with IRQ 55 Jan 20 23:51:49.502726 kernel: pcieport 0000:00:01.5: AER: 
enabled with IRQ 55 Jan 20 23:51:49.502812 kernel: pcieport 0000:00:01.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.502898 kernel: pcieport 0000:00:01.6: PME: Signaling with IRQ 56 Jan 20 23:51:49.502981 kernel: pcieport 0000:00:01.6: AER: enabled with IRQ 56 Jan 20 23:51:49.503063 kernel: pcieport 0000:00:01.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.503152 kernel: pcieport 0000:00:01.7: PME: Signaling with IRQ 57 Jan 20 23:51:49.503234 kernel: pcieport 0000:00:01.7: AER: enabled with IRQ 57 Jan 20 23:51:49.503316 kernel: pcieport 0000:00:01.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.503328 kernel: ACPI: \_SB_.PCI0.GSI2: Enabled at IRQ 37 Jan 20 23:51:49.503410 kernel: pcieport 0000:00:02.0: PME: Signaling with IRQ 58 Jan 20 23:51:49.503513 kernel: pcieport 0000:00:02.0: AER: enabled with IRQ 58 Jan 20 23:51:49.503602 kernel: pcieport 0000:00:02.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.503692 kernel: pcieport 0000:00:02.1: PME: Signaling with IRQ 59 Jan 20 23:51:49.503777 kernel: pcieport 0000:00:02.1: AER: enabled with IRQ 59 Jan 20 23:51:49.503862 kernel: pcieport 0000:00:02.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.503947 kernel: pcieport 0000:00:02.2: PME: Signaling with IRQ 60 Jan 20 23:51:49.504029 kernel: pcieport 0000:00:02.2: AER: enabled with IRQ 60 Jan 20 23:51:49.504114 kernel: pcieport 0000:00:02.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.504200 kernel: pcieport 0000:00:02.3: PME: Signaling with IRQ 61 Jan 20 23:51:49.504287 kernel: pcieport 0000:00:02.3: AER: enabled with IRQ 61 Jan 20 23:51:49.504371 kernel: pcieport 0000:00:02.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.504472 kernel: pcieport 0000:00:02.4: PME: Signaling with IRQ 62 Jan 20 23:51:49.504567 kernel: pcieport 0000:00:02.4: AER: enabled with IRQ 62 Jan 20 23:51:49.504651 kernel: pcieport 0000:00:02.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.504742 kernel: pcieport 0000:00:02.5: PME: Signaling with IRQ 63 Jan 20 23:51:49.504845 kernel: pcieport 0000:00:02.5: AER: enabled with IRQ 63 Jan 20 23:51:49.504935 kernel: pcieport 0000:00:02.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.505023 kernel: pcieport 0000:00:02.6: PME: Signaling with IRQ 64 Jan 20 23:51:49.505107 kernel: pcieport 0000:00:02.6: AER: enabled with IRQ 64 Jan 20 23:51:49.505190 kernel: pcieport 0000:00:02.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.505279 kernel: pcieport 0000:00:02.7: PME: Signaling with IRQ 65 Jan 20 23:51:49.505363 kernel: pcieport 0000:00:02.7: AER: enabled with IRQ 65 Jan 20 23:51:49.505446 kernel: pcieport 0000:00:02.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- 
LLActRep+ Jan 20 23:51:49.505474 kernel: ACPI: \_SB_.PCI0.GSI3: Enabled at IRQ 38 Jan 20 23:51:49.505567 kernel: pcieport 0000:00:03.0: PME: Signaling with IRQ 66 Jan 20 23:51:49.505654 kernel: pcieport 0000:00:03.0: AER: enabled with IRQ 66 Jan 20 23:51:49.505742 kernel: pcieport 0000:00:03.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.505828 kernel: pcieport 0000:00:03.1: PME: Signaling with IRQ 67 Jan 20 23:51:49.505913 kernel: pcieport 0000:00:03.1: AER: enabled with IRQ 67 Jan 20 23:51:49.505997 kernel: pcieport 0000:00:03.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.506085 kernel: pcieport 0000:00:03.2: PME: Signaling with IRQ 68 Jan 20 23:51:49.506171 kernel: pcieport 0000:00:03.2: AER: enabled with IRQ 68 Jan 20 23:51:49.506274 kernel: pcieport 0000:00:03.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.506364 kernel: pcieport 0000:00:03.3: PME: Signaling with IRQ 69 Jan 20 23:51:49.506449 kernel: pcieport 0000:00:03.3: AER: enabled with IRQ 69 Jan 20 23:51:49.506549 kernel: pcieport 0000:00:03.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.506635 kernel: pcieport 0000:00:03.4: PME: Signaling with IRQ 70 Jan 20 23:51:49.506718 kernel: pcieport 0000:00:03.4: AER: enabled with IRQ 70 Jan 20 23:51:49.506800 kernel: pcieport 0000:00:03.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.506889 kernel: pcieport 0000:00:03.5: PME: Signaling with IRQ 71 Jan 20 23:51:49.506971 kernel: pcieport 0000:00:03.5: AER: enabled with IRQ 71 Jan 20 23:51:49.507053 kernel: pcieport 0000:00:03.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.507138 kernel: pcieport 0000:00:03.6: PME: Signaling with IRQ 72 Jan 20 23:51:49.507221 kernel: pcieport 0000:00:03.6: AER: enabled with IRQ 72 Jan 20 23:51:49.507309 kernel: pcieport 0000:00:03.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.507399 kernel: pcieport 0000:00:03.7: PME: Signaling with IRQ 73 Jan 20 23:51:49.507506 kernel: pcieport 0000:00:03.7: AER: enabled with IRQ 73 Jan 20 23:51:49.507592 kernel: pcieport 0000:00:03.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.507604 kernel: ACPI: \_SB_.PCI0.GSI0: Enabled at IRQ 35 Jan 20 23:51:49.507689 kernel: pcieport 0000:00:04.0: PME: Signaling with IRQ 74 Jan 20 23:51:49.507772 kernel: pcieport 0000:00:04.0: AER: enabled with IRQ 74 Jan 20 23:51:49.507853 kernel: pcieport 0000:00:04.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.507942 kernel: pcieport 0000:00:04.1: PME: Signaling with IRQ 75 Jan 20 23:51:49.508055 kernel: pcieport 0000:00:04.1: AER: enabled with IRQ 75 Jan 20 23:51:49.508146 kernel: pcieport 0000:00:04.1: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.508233 kernel: pcieport 0000:00:04.2: PME: Signaling with IRQ 76 Jan 20 
23:51:49.508316 kernel: pcieport 0000:00:04.2: AER: enabled with IRQ 76 Jan 20 23:51:49.508400 kernel: pcieport 0000:00:04.2: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.508503 kernel: pcieport 0000:00:04.3: PME: Signaling with IRQ 77 Jan 20 23:51:49.508590 kernel: pcieport 0000:00:04.3: AER: enabled with IRQ 77 Jan 20 23:51:49.508674 kernel: pcieport 0000:00:04.3: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.508760 kernel: pcieport 0000:00:04.4: PME: Signaling with IRQ 78 Jan 20 23:51:49.508863 kernel: pcieport 0000:00:04.4: AER: enabled with IRQ 78 Jan 20 23:51:49.508952 kernel: pcieport 0000:00:04.4: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.509045 kernel: pcieport 0000:00:04.5: PME: Signaling with IRQ 79 Jan 20 23:51:49.509131 kernel: pcieport 0000:00:04.5: AER: enabled with IRQ 79 Jan 20 23:51:49.509213 kernel: pcieport 0000:00:04.5: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.509300 kernel: pcieport 0000:00:04.6: PME: Signaling with IRQ 80 Jan 20 23:51:49.509384 kernel: pcieport 0000:00:04.6: AER: enabled with IRQ 80 Jan 20 23:51:49.509487 kernel: pcieport 0000:00:04.6: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.509585 kernel: pcieport 0000:00:04.7: PME: Signaling with IRQ 81 Jan 20 23:51:49.509673 kernel: pcieport 0000:00:04.7: AER: enabled with IRQ 81 Jan 20 23:51:49.509756 kernel: pcieport 0000:00:04.7: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.509843 kernel: pcieport 0000:00:05.0: PME: Signaling with IRQ 82 Jan 20 23:51:49.509926 kernel: pcieport 0000:00:05.0: AER: enabled with IRQ 82 Jan 20 23:51:49.510008 kernel: pcieport 0000:00:05.0: pciehp: Slot #0 AttnBtn+ PwrCtrl+ MRL- AttnInd+ PwrInd+ HotPlug+ Surprise+ Interlock+ NoCompl- IbPresDis- LLActRep+ Jan 20 23:51:49.510019 kernel: input: Power Button as /devices/LNXSYSTM:00/LNXSYBUS:00/PNP0C0C:00/input/input0 Jan 20 23:51:49.510030 kernel: ACPI: button: Power Button [PWRB] Jan 20 23:51:49.510121 kernel: virtio-pci 0000:01:00.0: enabling device (0000 -> 0002) Jan 20 23:51:49.510215 kernel: virtio-pci 0000:04:00.0: enabling device (0000 -> 0002) Jan 20 23:51:49.510228 kernel: Serial: 8250/16550 driver, 4 ports, IRQ sharing enabled Jan 20 23:51:49.510236 kernel: thunder_xcv, ver 1.0 Jan 20 23:51:49.510244 kernel: thunder_bgx, ver 1.0 Jan 20 23:51:49.510252 kernel: nicpf, ver 1.0 Jan 20 23:51:49.510263 kernel: nicvf, ver 1.0 Jan 20 23:51:49.510382 kernel: rtc-efi rtc-efi.0: registered as rtc0 Jan 20 23:51:49.510482 kernel: rtc-efi rtc-efi.0: setting system clock to 2026-01-20T23:51:48 UTC (1768953108) Jan 20 23:51:49.510494 kernel: hid: raw HID events driver (C) Jiri Kosina Jan 20 23:51:49.510503 kernel: hw perfevents: enabled with armv8_pmuv3_0 PMU driver, 7 (0,8000003f) counters available Jan 20 23:51:49.510511 kernel: watchdog: NMI not fully supported Jan 20 23:51:49.510521 kernel: watchdog: Hard watchdog permanently disabled Jan 20 23:51:49.510530 kernel: NET: Registered PF_INET6 protocol family Jan 20 23:51:49.510538 kernel: Segment Routing with IPv6 Jan 20 23:51:49.510546 kernel: In-situ OAM (IOAM) with 
IPv6 Jan 20 23:51:49.510554 kernel: NET: Registered PF_PACKET protocol family Jan 20 23:51:49.510562 kernel: Key type dns_resolver registered Jan 20 23:51:49.510571 kernel: registered taskstats version 1 Jan 20 23:51:49.510581 kernel: Loading compiled-in X.509 certificates Jan 20 23:51:49.510589 kernel: Loaded X.509 cert 'Kinvolk GmbH: Module signing key for 6.12.66-flatcar: ae4cb0460a35d8e9b47e83cc3a018fffd2136c96' Jan 20 23:51:49.510597 kernel: Demotion targets for Node 0: null Jan 20 23:51:49.510605 kernel: Key type .fscrypt registered Jan 20 23:51:49.510613 kernel: Key type fscrypt-provisioning registered Jan 20 23:51:49.510621 kernel: ima: No TPM chip found, activating TPM-bypass! Jan 20 23:51:49.510629 kernel: ima: Allocated hash algorithm: sha1 Jan 20 23:51:49.510637 kernel: ima: No architecture policies found Jan 20 23:51:49.510646 kernel: alg: No test for fips(ansi_cprng) (fips_ansi_cprng) Jan 20 23:51:49.510654 kernel: clk: Disabling unused clocks Jan 20 23:51:49.510662 kernel: PM: genpd: Disabling unused power domains Jan 20 23:51:49.510670 kernel: Freeing unused kernel memory: 12480K Jan 20 23:51:49.510678 kernel: Run /init as init process Jan 20 23:51:49.510686 kernel: with arguments: Jan 20 23:51:49.510695 kernel: /init Jan 20 23:51:49.510704 kernel: with environment: Jan 20 23:51:49.510712 kernel: HOME=/ Jan 20 23:51:49.510720 kernel: TERM=linux Jan 20 23:51:49.510728 kernel: ACPI: bus type USB registered Jan 20 23:51:49.510736 kernel: usbcore: registered new interface driver usbfs Jan 20 23:51:49.510744 kernel: usbcore: registered new interface driver hub Jan 20 23:51:49.510753 kernel: usbcore: registered new device driver usb Jan 20 23:51:49.510848 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 20 23:51:49.510939 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 1 Jan 20 23:51:49.511025 kernel: xhci_hcd 0000:02:00.0: hcc params 0x00087001 hci version 0x100 quirks 0x0000000000000010 Jan 20 23:51:49.511109 kernel: xhci_hcd 0000:02:00.0: xHCI Host Controller Jan 20 23:51:49.511193 kernel: xhci_hcd 0000:02:00.0: new USB bus registered, assigned bus number 2 Jan 20 23:51:49.511277 kernel: xhci_hcd 0000:02:00.0: Host supports USB 3.0 SuperSpeed Jan 20 23:51:49.511395 kernel: hub 1-0:1.0: USB hub found Jan 20 23:51:49.511527 kernel: hub 1-0:1.0: 4 ports detected Jan 20 23:51:49.511641 kernel: usb usb2: We don't know the algorithms for LPM for this host, disabling LPM. Jan 20 23:51:49.511750 kernel: hub 2-0:1.0: USB hub found Jan 20 23:51:49.511843 kernel: hub 2-0:1.0: 4 ports detected Jan 20 23:51:49.511952 kernel: virtio_blk virtio1: 4/0/0 default/read/poll queues Jan 20 23:51:49.512042 kernel: virtio_blk virtio1: [vda] 104857600 512-byte logical blocks (53.7 GB/50.0 GiB) Jan 20 23:51:49.512054 kernel: GPT:Primary header thinks Alt. header is not at the end of the disk. Jan 20 23:51:49.512063 kernel: GPT:25804799 != 104857599 Jan 20 23:51:49.512072 kernel: GPT:Alternate GPT header not at the end of the disk. Jan 20 23:51:49.512080 kernel: GPT:25804799 != 104857599 Jan 20 23:51:49.512089 kernel: GPT: Use GNU Parted to correct GPT errors. Jan 20 23:51:49.512100 kernel: vda: vda1 vda2 vda3 vda4 vda6 vda7 vda9 Jan 20 23:51:49.512108 kernel: SCSI subsystem initialized Jan 20 23:51:49.512117 kernel: device-mapper: core: CONFIG_IMA_DISABLE_HTABLE is disabled. Duplicate IMA measurements will not be recorded in the IMA log. 
Jan 20 23:51:49.512125 kernel: device-mapper: uevent: version 1.0.3 Jan 20 23:51:49.512134 kernel: device-mapper: ioctl: 4.48.0-ioctl (2023-03-01) initialised: dm-devel@lists.linux.dev Jan 20 23:51:49.512143 kernel: device-mapper: verity: sha256 using shash "sha256-ce" Jan 20 23:51:49.512152 kernel: raid6: neonx8 gen() 15748 MB/s Jan 20 23:51:49.512161 kernel: raid6: neonx4 gen() 15681 MB/s Jan 20 23:51:49.512169 kernel: raid6: neonx2 gen() 13202 MB/s Jan 20 23:51:49.512177 kernel: raid6: neonx1 gen() 10303 MB/s Jan 20 23:51:49.512185 kernel: raid6: int64x8 gen() 6791 MB/s Jan 20 23:51:49.512194 kernel: raid6: int64x4 gen() 7299 MB/s Jan 20 23:51:49.512202 kernel: raid6: int64x2 gen() 6029 MB/s Jan 20 23:51:49.512210 kernel: raid6: int64x1 gen() 5028 MB/s Jan 20 23:51:49.512220 kernel: raid6: using algorithm neonx8 gen() 15748 MB/s Jan 20 23:51:49.512228 kernel: raid6: .... xor() 12014 MB/s, rmw enabled Jan 20 23:51:49.512236 kernel: raid6: using neon recovery algorithm Jan 20 23:51:49.512245 kernel: xor: measuring software checksum speed Jan 20 23:51:49.512255 kernel: 8regs : 21653 MB/sec Jan 20 23:51:49.512264 kernel: 32regs : 21699 MB/sec Jan 20 23:51:49.512273 kernel: arm64_neon : 27691 MB/sec Jan 20 23:51:49.512282 kernel: xor: using function: arm64_neon (27691 MB/sec) Jan 20 23:51:49.512385 kernel: usb 1-1: new high-speed USB device number 2 using xhci_hcd Jan 20 23:51:49.512398 kernel: Btrfs loaded, zoned=no, fsverity=no Jan 20 23:51:49.512407 kernel: BTRFS: device fsid c7d7174b-f392-4c72-bb61-0601db27f9ed devid 1 transid 34 /dev/mapper/usr (253:0) scanned by mount (275) Jan 20 23:51:49.512416 kernel: BTRFS info (device dm-0): first mount of filesystem c7d7174b-f392-4c72-bb61-0601db27f9ed Jan 20 23:51:49.512427 kernel: BTRFS info (device dm-0): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:51:49.512435 kernel: BTRFS info (device dm-0): disabling log replay at mount time Jan 20 23:51:49.512444 kernel: BTRFS info (device dm-0): enabling free space tree Jan 20 23:51:49.512465 kernel: loop: module loaded Jan 20 23:51:49.512476 kernel: loop0: detected capacity change from 0 to 91840 Jan 20 23:51:49.512487 kernel: squashfs: version 4.0 (2009/01/31) Phillip Lougher Jan 20 23:51:49.512625 kernel: usb 1-2: new high-speed USB device number 3 using xhci_hcd Jan 20 23:51:49.512643 systemd[1]: Successfully made /usr/ read-only. Jan 20 23:51:49.512655 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:51:49.512664 systemd[1]: Detected virtualization kvm. Jan 20 23:51:49.512673 systemd[1]: Detected architecture arm64. Jan 20 23:51:49.512681 systemd[1]: Running in initrd. Jan 20 23:51:49.512690 systemd[1]: No hostname configured, using default hostname. Jan 20 23:51:49.512700 systemd[1]: Hostname set to . Jan 20 23:51:49.512709 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:51:49.512718 systemd[1]: Queued start job for default target initrd.target. Jan 20 23:51:49.512727 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:51:49.512735 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. 
Jan 20 23:51:49.512744 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:51:49.512756 systemd[1]: Expecting device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM... Jan 20 23:51:49.512765 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:51:49.512775 systemd[1]: Expecting device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT... Jan 20 23:51:49.512784 systemd[1]: Expecting device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A... Jan 20 23:51:49.512793 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:51:49.512802 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:51:49.512824 systemd[1]: Reached target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:51:49.512833 systemd[1]: Reached target paths.target - Path Units. Jan 20 23:51:49.512842 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:51:49.512851 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:51:49.512860 systemd[1]: Reached target timers.target - Timer Units. Jan 20 23:51:49.512868 systemd[1]: Listening on iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:51:49.512877 systemd[1]: Listening on iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:51:49.512888 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:51:49.512898 systemd[1]: Listening on systemd-journald-dev-log.socket - Journal Socket (/dev/log). Jan 20 23:51:49.512906 systemd[1]: Listening on systemd-journald.socket - Journal Sockets. Jan 20 23:51:49.512915 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:51:49.512924 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:51:49.512933 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:51:49.512943 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:51:49.512952 systemd[1]: afterburn-network-kargs.service - Afterburn Initrd Setup Network Kernel Arguments was skipped because no trigger condition checks were met. Jan 20 23:51:49.512961 systemd[1]: Starting ignition-setup-pre.service - Ignition env setup... Jan 20 23:51:49.512970 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:51:49.512979 systemd[1]: Finished network-cleanup.service - Network Cleanup. Jan 20 23:51:49.512988 systemd[1]: systemd-battery-check.service - Check battery level during early boot was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/class/power_supply). Jan 20 23:51:49.512997 systemd[1]: Starting systemd-fsck-usr.service... Jan 20 23:51:49.513007 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:51:49.513016 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:51:49.513025 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:51:49.513034 systemd[1]: Finished ignition-setup-pre.service - Ignition env setup. Jan 20 23:51:49.513044 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:51:49.513053 systemd[1]: Finished systemd-fsck-usr.service. Jan 20 23:51:49.513062 systemd[1]: Starting systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully... 
Jan 20 23:51:49.513096 systemd-journald[418]: Collecting audit messages is enabled. Jan 20 23:51:49.513119 kernel: bridge: filtering via arp/ip/ip6tables is no longer available by default. Update your scripts to load br_netfilter if you need this. Jan 20 23:51:49.513128 kernel: Bridge firewalling registered Jan 20 23:51:49.513137 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:51:49.513146 kernel: audit: type=1130 audit(1768953109.451:2): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.513155 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:51:49.513166 kernel: audit: type=1130 audit(1768953109.455:3): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.513174 systemd[1]: Starting dracut-cmdline-ask.service - dracut ask for additional cmdline parameters... Jan 20 23:51:49.513183 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:51:49.513192 systemd[1]: Finished systemd-tmpfiles-setup-dev-early.service - Create Static Device Nodes in /dev gracefully. Jan 20 23:51:49.513201 kernel: audit: type=1130 audit(1768953109.476:4): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.513210 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:51:49.513220 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:51:49.513230 kernel: audit: type=1130 audit(1768953109.498:5): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.513240 systemd[1]: Finished dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:51:49.513249 kernel: audit: type=1130 audit(1768953109.503:6): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.513258 systemd[1]: Starting dracut-cmdline.service - dracut cmdline hook... Jan 20 23:51:49.513267 kernel: audit: type=1334 audit(1768953109.507:7): prog-id=6 op=LOAD Jan 20 23:51:49.513275 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:51:49.513286 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:51:49.513295 systemd-journald[418]: Journal started Jan 20 23:51:49.513314 systemd-journald[418]: Runtime Journal (/run/log/journal/16425fea29554372b76ac9a5ba5ba762) is 8M, max 319.5M, 311.5M free. Jan 20 23:51:49.522545 kernel: audit: type=1130 audit(1768953109.514:8): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.451000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:51:49.455000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.476000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev-early comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.498000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.503000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.507000 audit: BPF prog-id=6 op=LOAD Jan 20 23:51:49.514000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.448612 systemd-modules-load[420]: Inserted module 'br_netfilter' Jan 20 23:51:49.525103 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 23:51:49.524000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.529282 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 23:51:49.532653 kernel: audit: type=1130 audit(1768953109.524:9): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.538019 dracut-cmdline[445]: Using kernel command line parameters: rd.driver.pre=btrfs SYSTEMD_SULOGIN_FORCE=1 BOOT_IMAGE=/flatcar/vmlinuz-a mount.usr=/dev/mapper/usr verity.usr=PARTUUID=7130c94a-213a-4e5a-8e26-6cce9662f132 rootflags=rw mount.usrflags=ro consoleblank=0 root=LABEL=ROOT console=ttyAMA0,115200n8 flatcar.first_boot=detected acpi=force flatcar.oem.id=openstack verity.usrhash=3c423a3ed4865abab898483a94535823dbc3dcf7b9fc4db9a9e44dcb3b3370eb Jan 20 23:51:49.540035 systemd-tmpfiles[460]: /usr/lib/tmpfiles.d/var.conf:14: Duplicate line for path "/var/log", ignoring. Jan 20 23:51:49.545323 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:51:49.546000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.550486 kernel: audit: type=1130 audit(1768953109.546:10): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.569735 systemd-resolved[446]: Positive Trust Anchors: Jan 20 23:51:49.569753 systemd-resolved[446]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:51:49.569756 systemd-resolved[446]: . 
IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:51:49.569788 systemd-resolved[446]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:51:49.596121 systemd-resolved[446]: Defaulting to hostname 'linux'. Jan 20 23:51:49.597045 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:51:49.597000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.598199 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:51:49.638489 kernel: Loading iSCSI transport class v2.0-870. Jan 20 23:51:49.650525 kernel: iscsi: registered transport (tcp) Jan 20 23:51:49.665506 kernel: iscsi: registered transport (qla4xxx) Jan 20 23:51:49.665530 kernel: QLogic iSCSI HBA Driver Jan 20 23:51:49.688586 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 23:51:49.705477 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:51:49.708000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.708943 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:51:49.759036 systemd[1]: Finished dracut-cmdline.service - dracut cmdline hook. Jan 20 23:51:49.759000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.761532 systemd[1]: Starting dracut-pre-udev.service - dracut pre-udev hook... Jan 20 23:51:49.763137 systemd[1]: Starting parse-ip-for-networkd.service - Write systemd-networkd units from cmdline... Jan 20 23:51:49.799550 systemd[1]: Finished dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:51:49.800000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.801000 audit: BPF prog-id=7 op=LOAD Jan 20 23:51:49.801000 audit: BPF prog-id=8 op=LOAD Jan 20 23:51:49.802201 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:51:49.833715 systemd-udevd[694]: Using default interface naming scheme 'v257'. Jan 20 23:51:49.841905 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:51:49.842000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:51:49.844360 systemd[1]: Starting dracut-pre-trigger.service - dracut pre-trigger hook... Jan 20 23:51:49.870202 dracut-pre-trigger[760]: rd.md=0: removing MD RAID activation Jan 20 23:51:49.878519 systemd[1]: Finished parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:51:49.879000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.880000 audit: BPF prog-id=9 op=LOAD Jan 20 23:51:49.881257 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:51:49.900674 systemd[1]: Finished dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:51:49.901000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.903400 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 23:51:49.936257 systemd-networkd[815]: lo: Link UP Jan 20 23:51:49.937144 systemd-networkd[815]: lo: Gained carrier Jan 20 23:51:49.938000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.937726 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:51:49.938737 systemd[1]: Reached target network.target - Network. Jan 20 23:51:49.983623 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:51:49.984000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:49.986534 systemd[1]: Starting dracut-initqueue.service - dracut initqueue hook... Jan 20 23:51:50.055778 systemd[1]: Found device dev-disk-by\x2dlabel-EFI\x2dSYSTEM.device - /dev/disk/by-label/EFI-SYSTEM. Jan 20 23:51:50.069462 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 23:51:50.082888 systemd[1]: Found device dev-disk-by\x2dlabel-ROOT.device - /dev/disk/by-label/ROOT. Jan 20 23:51:50.092085 systemd[1]: Found device dev-disk-by\x2dpartlabel-USR\x2dA.device - /dev/disk/by-partlabel/USR-A. Jan 20 23:51:50.099862 systemd[1]: Starting disk-uuid.service - Generate new UUID for disk GPT if necessary... Jan 20 23:51:50.102467 kernel: input: QEMU QEMU USB Tablet as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-1/1-1:1.0/0003:0627:0001.0001/input/input1 Jan 20 23:51:50.104481 kernel: hid-generic 0003:0627:0001.0001: input,hidraw0: USB HID v0.01 Mouse [QEMU QEMU USB Tablet] on usb-0000:02:00.0-1/input0 Jan 20 23:51:50.117485 kernel: input: QEMU QEMU USB Keyboard as /devices/pci0000:00/0000:00:01.1/0000:02:00.0/usb1/1-2/1-2:1.0/0003:0627:0001.0002/input/input2 Jan 20 23:51:50.120523 disk-uuid[876]: Primary Header is updated. Jan 20 23:51:50.120523 disk-uuid[876]: Secondary Entries is updated. Jan 20 23:51:50.120523 disk-uuid[876]: Secondary Header is updated. Jan 20 23:51:50.125000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:51:50.123356 systemd-networkd[815]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:51:50.123360 systemd-networkd[815]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:51:50.124066 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:51:50.124178 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:51:50.125606 systemd[1]: Stopping systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:51:50.126025 systemd-networkd[815]: eth0: Link UP Jan 20 23:51:50.126504 systemd-networkd[815]: eth0: Gained carrier Jan 20 23:51:50.126518 systemd-networkd[815]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:51:50.130743 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:51:50.162473 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:51:50.163000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:50.170697 kernel: hid-generic 0003:0627:0001.0002: input,hidraw1: USB HID v1.11 Keyboard [QEMU QEMU USB Keyboard] on usb-0000:02:00.0-2/input0 Jan 20 23:51:50.170935 kernel: usbcore: registered new interface driver usbhid Jan 20 23:51:50.170948 kernel: usbhid: USB HID core driver Jan 20 23:51:50.188589 systemd-networkd[815]: eth0: DHCPv4 address 10.0.2.209/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 20 23:51:50.225290 systemd[1]: Finished dracut-initqueue.service - dracut initqueue hook. Jan 20 23:51:50.226000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:50.227581 systemd[1]: Reached target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 23:51:50.228651 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:51:50.230434 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:51:50.233253 systemd[1]: Starting dracut-pre-mount.service - dracut pre-mount hook... Jan 20 23:51:50.256830 systemd[1]: Finished dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:51:50.257000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:51.184728 disk-uuid[879]: Warning: The kernel is still using the old partition table. Jan 20 23:51:51.184728 disk-uuid[879]: The new table will be used at the next reboot or after you Jan 20 23:51:51.184728 disk-uuid[879]: run partprobe(8) or kpartx(8) Jan 20 23:51:51.184728 disk-uuid[879]: The operation has completed successfully. Jan 20 23:51:51.190125 systemd[1]: disk-uuid.service: Deactivated successfully. Jan 20 23:51:51.190000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:51:51.190000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=disk-uuid comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:51.190234 systemd[1]: Finished disk-uuid.service - Generate new UUID for disk GPT if necessary. Jan 20 23:51:51.192334 systemd[1]: Starting ignition-setup.service - Ignition (setup)... Jan 20 23:51:51.235510 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (910) Jan 20 23:51:51.238011 kernel: BTRFS info (device vda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:51:51.238084 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:51:51.244689 kernel: BTRFS info (device vda6): turning on async discard Jan 20 23:51:51.244742 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 23:51:51.250475 kernel: BTRFS info (device vda6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:51:51.253552 systemd[1]: Finished ignition-setup.service - Ignition (setup). Jan 20 23:51:51.254000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:51.255758 systemd[1]: Starting ignition-fetch-offline.service - Ignition (fetch-offline)... Jan 20 23:51:51.435261 ignition[929]: Ignition 2.24.0 Jan 20 23:51:51.435275 ignition[929]: Stage: fetch-offline Jan 20 23:51:51.435316 ignition[929]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:51:51.435326 ignition[929]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:51:51.438177 systemd[1]: Finished ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:51:51.439000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:51.435582 ignition[929]: parsed url from cmdline: "" Jan 20 23:51:51.435586 ignition[929]: no config URL provided Jan 20 23:51:51.436373 ignition[929]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 23:51:51.441819 systemd[1]: Starting ignition-fetch.service - Ignition (fetch)... Jan 20 23:51:51.436383 ignition[929]: no config at "/usr/lib/ignition/user.ign" Jan 20 23:51:51.436388 ignition[929]: failed to fetch config: resource requires networking Jan 20 23:51:51.436573 ignition[929]: Ignition finished successfully Jan 20 23:51:51.465581 ignition[943]: Ignition 2.24.0 Jan 20 23:51:51.465601 ignition[943]: Stage: fetch Jan 20 23:51:51.465747 ignition[943]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:51:51.465755 ignition[943]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:51:51.465836 ignition[943]: parsed url from cmdline: "" Jan 20 23:51:51.465840 ignition[943]: no config URL provided Jan 20 23:51:51.465844 ignition[943]: reading system config file "/usr/lib/ignition/user.ign" Jan 20 23:51:51.465850 ignition[943]: no config at "/usr/lib/ignition/user.ign" Jan 20 23:51:51.466018 ignition[943]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 20 23:51:51.466036 ignition[943]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... 
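
[editor's note] The disk-uuid warning a few entries up notes that the kernel keeps using the old partition table until it is re-read, e.g. via partprobe(8). Purely as an illustration of that advice, a minimal Python sketch; the device path /dev/vda is an assumption for illustration, and the initrd here simply lets the table reload later rather than doing this.

import subprocess

def reread_partition_table(device: str = "/dev/vda") -> None:
    # partprobe(8) asks the kernel to re-read the partition table; it exits
    # non-zero if the device is busy or missing, which check=True surfaces.
    subprocess.run(["partprobe", device], check=True)

if __name__ == "__main__":
    reread_partition_table()
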
Jan 20 23:51:51.466042 ignition[943]: GET http://169.254.169.254/openstack/latest/user_data: attempt #1 Jan 20 23:51:52.022858 systemd-networkd[815]: eth0: Gained IPv6LL Jan 20 23:51:52.466562 ignition[943]: config drive ("/dev/disk/by-label/config-2") not found. Waiting... Jan 20 23:51:52.466584 ignition[943]: config drive ("/dev/disk/by-label/CONFIG-2") not found. Waiting... Jan 20 23:51:53.160907 ignition[943]: GET result: OK Jan 20 23:51:53.161029 ignition[943]: parsing config with SHA512: 9cb5cfdae25054d3572a6e5c57ae806f520e027b9ed6b13d633c5376453bd1b077c586f401fd1f3ea6b1437152a31622bfca0c3e28ccb27f3a18c964cfabb4ad Jan 20 23:51:53.165824 unknown[943]: fetched base config from "system" Jan 20 23:51:53.165835 unknown[943]: fetched base config from "system" Jan 20 23:51:53.166164 ignition[943]: fetch: fetch complete Jan 20 23:51:53.165841 unknown[943]: fetched user config from "openstack" Jan 20 23:51:53.166169 ignition[943]: fetch: fetch passed Jan 20 23:51:53.166207 ignition[943]: Ignition finished successfully Jan 20 23:51:53.169423 systemd[1]: Finished ignition-fetch.service - Ignition (fetch). Jan 20 23:51:53.176948 kernel: kauditd_printk_skb: 20 callbacks suppressed Jan 20 23:51:53.176996 kernel: audit: type=1130 audit(1768953113.171:31): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.171000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.176104 systemd[1]: Starting ignition-kargs.service - Ignition (kargs)... Jan 20 23:51:53.217567 ignition[951]: Ignition 2.24.0 Jan 20 23:51:53.217586 ignition[951]: Stage: kargs Jan 20 23:51:53.217742 ignition[951]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:51:53.217751 ignition[951]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:51:53.218536 ignition[951]: kargs: kargs passed Jan 20 23:51:53.221000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.224475 kernel: audit: type=1130 audit(1768953113.221:32): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.220903 systemd[1]: Finished ignition-kargs.service - Ignition (kargs). Jan 20 23:51:53.218587 ignition[951]: Ignition finished successfully Jan 20 23:51:53.223167 systemd[1]: Starting ignition-disks.service - Ignition (disks)... Jan 20 23:51:53.252666 ignition[958]: Ignition 2.24.0 Jan 20 23:51:53.252684 ignition[958]: Stage: disks Jan 20 23:51:53.252851 ignition[958]: no configs at "/usr/lib/ignition/base.d" Jan 20 23:51:53.252859 ignition[958]: no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:51:53.253625 ignition[958]: disks: disks passed Jan 20 23:51:53.255877 systemd[1]: Finished ignition-disks.service - Ignition (disks). Jan 20 23:51:53.256000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:51:53.253670 ignition[958]: Ignition finished successfully Jan 20 23:51:53.261190 kernel: audit: type=1130 audit(1768953113.256:33): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.257200 systemd[1]: Reached target initrd-root-device.target - Initrd Root Device. Jan 20 23:51:53.260628 systemd[1]: Reached target local-fs-pre.target - Preparation for Local File Systems. Jan 20 23:51:53.262186 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:51:53.263787 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 23:51:53.265412 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:51:53.267900 systemd[1]: Starting systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT... Jan 20 23:51:53.323140 systemd-fsck[967]: ROOT: clean, 15/1631200 files, 112378/1617920 blocks Jan 20 23:51:53.328538 systemd[1]: Finished systemd-fsck-root.service - File System Check on /dev/disk/by-label/ROOT. Jan 20 23:51:53.331000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.333513 systemd[1]: Mounting sysroot.mount - /sysroot... Jan 20 23:51:53.336526 kernel: audit: type=1130 audit(1768953113.331:34): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.454518 kernel: EXT4-fs (vda9): mounted filesystem 81ddf123-ac73-4435-a963-542e3692f093 r/w with ordered data mode. Quota mode: none. Jan 20 23:51:53.455741 systemd[1]: Mounted sysroot.mount - /sysroot. Jan 20 23:51:53.456984 systemd[1]: Reached target initrd-root-fs.target - Initrd Root File System. Jan 20 23:51:53.461057 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 23:51:53.462948 systemd[1]: Mounting sysroot-usr.mount - /sysroot/usr... Jan 20 23:51:53.463876 systemd[1]: flatcar-metadata-hostname.service - Flatcar Metadata Hostname Agent was skipped because no trigger condition checks were met. Jan 20 23:51:53.465608 systemd[1]: Starting flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent... Jan 20 23:51:53.466917 systemd[1]: ignition-remount-sysroot.service - Remount /sysroot read-write for Ignition was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/sysroot). Jan 20 23:51:53.466953 systemd[1]: Reached target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:51:53.472296 systemd[1]: Mounted sysroot-usr.mount - /sysroot/usr. Jan 20 23:51:53.474686 systemd[1]: Starting initrd-setup-root.service - Root filesystem setup... Jan 20 23:51:53.487517 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (975) Jan 20 23:51:53.490067 kernel: BTRFS info (device vda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:51:53.490119 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:51:53.494983 kernel: BTRFS info (device vda6): turning on async discard Jan 20 23:51:53.495044 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 23:51:53.497013 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
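
[editor's note] For context on the Ignition fetch stage logged above (a config drive is probed first, then the OpenStack metadata service), a minimal sketch of that fallback. The device labels, the user_data URL, and the SHA-512 logging are taken from the log; the retry policy and the raw read of the config drive are simplifications, not Ignition's actual implementation.

import hashlib
import os
import time
import urllib.request

CONFIG_DRIVE_LABELS = (
    "/dev/disk/by-label/config-2",
    "/dev/disk/by-label/CONFIG-2",
)
USER_DATA_URL = "http://169.254.169.254/openstack/latest/user_data"

def fetch_userdata(wait_seconds: int = 5) -> bytes:
    # Give a config drive a brief chance to appear before falling back.
    deadline = time.monotonic() + wait_seconds
    while time.monotonic() < deadline:
        for label in CONFIG_DRIVE_LABELS:
            if os.path.exists(label):
                with open(label, "rb") as drive:
                    return drive.read()  # simplification: real code mounts it
        time.sleep(1)
    # No config drive found: fall back to the metadata service, as logged.
    with urllib.request.urlopen(USER_DATA_URL, timeout=10) as resp:
        return resp.read()

if __name__ == "__main__":
    data = fetch_userdata()
    print("parsing config with SHA512:", hashlib.sha512(data).hexdigest())
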
Jan 20 23:51:53.542493 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:51:53.684185 systemd[1]: Finished initrd-setup-root.service - Root filesystem setup. Jan 20 23:51:53.684000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.688350 systemd[1]: Starting ignition-mount.service - Ignition (mount)... Jan 20 23:51:53.690093 kernel: audit: type=1130 audit(1768953113.684:35): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.690085 systemd[1]: Starting sysroot-boot.service - /sysroot/boot... Jan 20 23:51:53.706608 systemd[1]: sysroot-oem.mount: Deactivated successfully. Jan 20 23:51:53.708099 kernel: BTRFS info (device vda6): last unmount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:51:53.733976 systemd[1]: Finished sysroot-boot.service - /sysroot/boot. Jan 20 23:51:53.734000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.738490 kernel: audit: type=1130 audit(1768953113.734:36): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.738938 ignition[1081]: INFO : Ignition 2.24.0 Jan 20 23:51:53.738938 ignition[1081]: INFO : Stage: mount Jan 20 23:51:53.741561 ignition[1081]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:51:53.741561 ignition[1081]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:51:53.741561 ignition[1081]: INFO : mount: mount passed Jan 20 23:51:53.741561 ignition[1081]: INFO : Ignition finished successfully Jan 20 23:51:53.747701 kernel: audit: type=1130 audit(1768953113.743:37): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.743000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:51:53.741944 systemd[1]: Finished ignition-mount.service - Ignition (mount). Jan 20 23:51:54.592561 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:51:56.601503 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:00.611524 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:00.614810 coreos-metadata[977]: Jan 20 23:52:00.614 WARN failed to locate config-drive, using the metadata service API instead Jan 20 23:52:00.633669 coreos-metadata[977]: Jan 20 23:52:00.633 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 23:52:02.353192 coreos-metadata[977]: Jan 20 23:52:02.353 INFO Fetch successful Jan 20 23:52:02.354357 coreos-metadata[977]: Jan 20 23:52:02.354 INFO wrote hostname ci-4547-0-0-n-e5b472a427 to /sysroot/etc/hostname Jan 20 23:52:02.356702 systemd[1]: flatcar-openstack-hostname.service: Deactivated successfully. 
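
[editor's note] Once the config-drive lookups above time out, the hostname agent falls back to the metadata service and writes the result into the target root, as the "wrote hostname ... to /sysroot/etc/hostname" line shows. A minimal sketch of that effect, using the endpoint and destination path from the log; error handling and the agent's real retry logic are omitted.

import urllib.request

HOSTNAME_URL = "http://169.254.169.254/latest/meta-data/hostname"
SYSROOT_HOSTNAME = "/sysroot/etc/hostname"

def write_hostname() -> str:
    # Fetch the instance hostname from the EC2-compatible metadata endpoint.
    with urllib.request.urlopen(HOSTNAME_URL, timeout=10) as resp:
        hostname = resp.read().decode().strip()
    # Write it into the target root so the real system boots with it set.
    with open(SYSROOT_HOSTNAME, "w") as f:
        f.write(hostname + "\n")
    return hostname

if __name__ == "__main__":
    print("wrote hostname", write_hostname(), "to", SYSROOT_HOSTNAME)
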
Jan 20 23:52:02.358000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:02.357866 systemd[1]: Finished flatcar-openstack-hostname.service - Flatcar OpenStack Metadata Hostname Agent. Jan 20 23:52:02.364790 kernel: audit: type=1130 audit(1768953122.358:38): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:02.364818 kernel: audit: type=1131 audit(1768953122.358:39): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:02.358000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=flatcar-openstack-hostname comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:02.360085 systemd[1]: Starting ignition-files.service - Ignition (files)... Jan 20 23:52:02.389104 systemd[1]: Mounting sysroot-oem.mount - /sysroot/oem... Jan 20 23:52:02.410478 kernel: BTRFS: device label OEM devid 1 transid 11 /dev/vda6 (254:6) scanned by mount (1099) Jan 20 23:52:02.412502 kernel: BTRFS info (device vda6): first mount of filesystem dfc57a4b-47e0-40ee-b63c-50625c8a8124 Jan 20 23:52:02.412536 kernel: BTRFS info (device vda6): using crc32c (crc32c-generic) checksum algorithm Jan 20 23:52:02.417655 kernel: BTRFS info (device vda6): turning on async discard Jan 20 23:52:02.417681 kernel: BTRFS info (device vda6): enabling free space tree Jan 20 23:52:02.419135 systemd[1]: Mounted sysroot-oem.mount - /sysroot/oem. 
Jan 20 23:52:02.454916 ignition[1117]: INFO : Ignition 2.24.0 Jan 20 23:52:02.454916 ignition[1117]: INFO : Stage: files Jan 20 23:52:02.456539 ignition[1117]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:52:02.456539 ignition[1117]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:52:02.456539 ignition[1117]: DEBUG : files: compiled without relabeling support, skipping Jan 20 23:52:02.459500 ignition[1117]: INFO : files: ensureUsers: op(1): [started] creating or modifying user "core" Jan 20 23:52:02.459500 ignition[1117]: DEBUG : files: ensureUsers: op(1): executing: "usermod" "--root" "/sysroot" "core" Jan 20 23:52:02.462810 ignition[1117]: INFO : files: ensureUsers: op(1): [finished] creating or modifying user "core" Jan 20 23:52:02.464073 ignition[1117]: INFO : files: ensureUsers: op(2): [started] adding ssh keys to user "core" Jan 20 23:52:02.464073 ignition[1117]: INFO : files: ensureUsers: op(2): [finished] adding ssh keys to user "core" Jan 20 23:52:02.463566 unknown[1117]: wrote ssh authorized keys file for user: core Jan 20 23:52:02.468327 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(3): [started] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 23:52:02.470091 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET https://get.helm.sh/helm-v3.17.3-linux-arm64.tar.gz: attempt #1 Jan 20 23:52:02.524033 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(3): GET result: OK Jan 20 23:52:02.628664 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(3): [finished] writing file "/sysroot/opt/helm-v3.17.3-linux-arm64.tar.gz" Jan 20 23:52:02.628664 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(4): [started] writing file "/sysroot/home/core/install.sh" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(4): [finished] writing file "/sysroot/home/core/install.sh" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(5): [started] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(5): [finished] writing file "/sysroot/home/core/nginx.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(6): [started] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(6): [finished] writing file "/sysroot/home/core/nfs-pod.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(7): [started] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(7): [finished] writing file "/sysroot/home/core/nfs-pvc.yaml" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(8): [started] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(8): [finished] writing file "/sysroot/etc/flatcar/update.conf" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(9): [started] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: 
op(9): [finished] writing link "/sysroot/etc/extensions/kubernetes.raw" -> "/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(a): [started] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 23:52:02.632223 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET https://extensions.flatcar.org/extensions/kubernetes-v1.33.0-arm64.raw: attempt #1 Jan 20 23:52:02.908049 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(a): GET result: OK Jan 20 23:52:03.465462 ignition[1117]: INFO : files: createFilesystemsFiles: createFiles: op(a): [finished] writing file "/sysroot/opt/extensions/kubernetes/kubernetes-v1.33.0-arm64.raw" Jan 20 23:52:03.467768 ignition[1117]: INFO : files: op(b): [started] processing unit "prepare-helm.service" Jan 20 23:52:03.469610 ignition[1117]: INFO : files: op(b): op(c): [started] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: op(b): op(c): [finished] writing unit "prepare-helm.service" at "/sysroot/etc/systemd/system/prepare-helm.service" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: op(b): [finished] processing unit "prepare-helm.service" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: op(d): [started] setting preset to enabled for "prepare-helm.service" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: op(d): [finished] setting preset to enabled for "prepare-helm.service" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: createResultFile: createFiles: op(e): [started] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: createResultFile: createFiles: op(e): [finished] writing file "/sysroot/etc/.ignition-result.json" Jan 20 23:52:03.472582 ignition[1117]: INFO : files: files passed Jan 20 23:52:03.472582 ignition[1117]: INFO : Ignition finished successfully Jan 20 23:52:03.486802 kernel: audit: type=1130 audit(1768953123.475:40): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.475000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.474725 systemd[1]: Finished ignition-files.service - Ignition (files). Jan 20 23:52:03.477363 systemd[1]: Starting ignition-quench.service - Ignition (record completion)... Jan 20 23:52:03.481329 systemd[1]: Starting initrd-setup-root-after-ignition.service - Root filesystem completion... Jan 20 23:52:03.492888 systemd[1]: ignition-quench.service: Deactivated successfully. Jan 20 23:52:03.493040 systemd[1]: Finished ignition-quench.service - Ignition (record completion). Jan 20 23:52:03.500251 kernel: audit: type=1130 audit(1768953123.494:41): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.500279 kernel: audit: type=1131 audit(1768953123.494:42): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:03.494000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.494000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-quench comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.504468 initrd-setup-root-after-ignition[1149]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:52:03.504468 initrd-setup-root-after-ignition[1149]: grep: /sysroot/usr/share/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:52:03.507512 initrd-setup-root-after-ignition[1153]: grep: /sysroot/etc/flatcar/enabled-sysext.conf: No such file or directory Jan 20 23:52:03.508827 systemd[1]: Finished initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:52:03.509000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.510132 systemd[1]: Reached target ignition-complete.target - Ignition Complete. Jan 20 23:52:03.515012 kernel: audit: type=1130 audit(1768953123.509:43): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.515009 systemd[1]: Starting initrd-parse-etc.service - Mountpoints Configured in the Real Root... Jan 20 23:52:03.551168 systemd[1]: initrd-parse-etc.service: Deactivated successfully. Jan 20 23:52:03.551312 systemd[1]: Finished initrd-parse-etc.service - Mountpoints Configured in the Real Root. Jan 20 23:52:03.559036 kernel: audit: type=1130 audit(1768953123.553:44): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.559065 kernel: audit: type=1131 audit(1768953123.553:45): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.553000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.553000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-parse-etc comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.553825 systemd[1]: Reached target initrd-fs.target - Initrd File Systems. Jan 20 23:52:03.559912 systemd[1]: Reached target initrd.target - Initrd Default Target. Jan 20 23:52:03.561647 systemd[1]: dracut-mount.service - dracut mount hook was skipped because no trigger condition checks were met. Jan 20 23:52:03.562654 systemd[1]: Starting dracut-pre-pivot.service - dracut pre-pivot and cleanup hook... Jan 20 23:52:03.601544 systemd[1]: Finished dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:52:03.602000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:03.606270 systemd[1]: Starting initrd-cleanup.service - Cleaning Up and Shutting Down Daemons... Jan 20 23:52:03.608517 kernel: audit: type=1130 audit(1768953123.602:46): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.630428 systemd[1]: Unnecessary job was removed for dev-mapper-usr.device - /dev/mapper/usr. Jan 20 23:52:03.630605 systemd[1]: Stopped target nss-lookup.target - Host and Network Name Lookups. Jan 20 23:52:03.632543 systemd[1]: Stopped target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:52:03.634470 systemd[1]: Stopped target timers.target - Timer Units. Jan 20 23:52:03.636112 systemd[1]: dracut-pre-pivot.service: Deactivated successfully. Jan 20 23:52:03.637000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.640509 kernel: audit: type=1131 audit(1768953123.637:47): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-pivot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.636250 systemd[1]: Stopped dracut-pre-pivot.service - dracut pre-pivot and cleanup hook. Jan 20 23:52:03.640589 systemd[1]: Stopped target initrd.target - Initrd Default Target. Jan 20 23:52:03.642445 systemd[1]: Stopped target basic.target - Basic System. Jan 20 23:52:03.643968 systemd[1]: Stopped target ignition-complete.target - Ignition Complete. Jan 20 23:52:03.645528 systemd[1]: Stopped target ignition-diskful.target - Ignition Boot Disk Setup. Jan 20 23:52:03.647363 systemd[1]: Stopped target initrd-root-device.target - Initrd Root Device. Jan 20 23:52:03.649182 systemd[1]: Stopped target initrd-usr-fs.target - Initrd /usr File System. Jan 20 23:52:03.650982 systemd[1]: Stopped target remote-fs.target - Remote File Systems. Jan 20 23:52:03.652654 systemd[1]: Stopped target remote-fs-pre.target - Preparation for Remote File Systems. Jan 20 23:52:03.654461 systemd[1]: Stopped target sysinit.target - System Initialization. Jan 20 23:52:03.656331 systemd[1]: Stopped target local-fs.target - Local File Systems. Jan 20 23:52:03.657926 systemd[1]: Stopped target swap.target - Swaps. Jan 20 23:52:03.659223 systemd[1]: dracut-pre-mount.service: Deactivated successfully. Jan 20 23:52:03.660000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.659362 systemd[1]: Stopped dracut-pre-mount.service - dracut pre-mount hook. Jan 20 23:52:03.661489 systemd[1]: Stopped target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:52:03.663358 systemd[1]: Stopped target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:52:03.665145 systemd[1]: clevis-luks-askpass.path: Deactivated successfully. Jan 20 23:52:03.666010 systemd[1]: Stopped clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:52:03.668000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-initqueue comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.667171 systemd[1]: dracut-initqueue.service: Deactivated successfully. 
Jan 20 23:52:03.667311 systemd[1]: Stopped dracut-initqueue.service - dracut initqueue hook. Jan 20 23:52:03.671000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root-after-ignition comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.669763 systemd[1]: initrd-setup-root-after-ignition.service: Deactivated successfully. Jan 20 23:52:03.673000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-files comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.669894 systemd[1]: Stopped initrd-setup-root-after-ignition.service - Root filesystem completion. Jan 20 23:52:03.671671 systemd[1]: ignition-files.service: Deactivated successfully. Jan 20 23:52:03.671784 systemd[1]: Stopped ignition-files.service - Ignition (files). Jan 20 23:52:03.674159 systemd[1]: Stopping ignition-mount.service - Ignition (mount)... Jan 20 23:52:03.677000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.675995 systemd[1]: kmod-static-nodes.service: Deactivated successfully. Jan 20 23:52:03.676126 systemd[1]: Stopped kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:52:03.681000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.678628 systemd[1]: Stopping sysroot-boot.service - /sysroot/boot... Jan 20 23:52:03.683000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.680028 systemd[1]: systemd-tmpfiles-setup.service: Deactivated successfully. Jan 20 23:52:03.685000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.680149 systemd[1]: Stopped systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:52:03.681880 systemd[1]: systemd-udev-trigger.service: Deactivated successfully. Jan 20 23:52:03.681990 systemd[1]: Stopped systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:52:03.683677 systemd[1]: dracut-pre-trigger.service: Deactivated successfully. Jan 20 23:52:03.683797 systemd[1]: Stopped dracut-pre-trigger.service - dracut pre-trigger hook. Jan 20 23:52:03.689719 systemd[1]: initrd-cleanup.service: Deactivated successfully. Jan 20 23:52:03.691490 systemd[1]: Finished initrd-cleanup.service - Cleaning Up and Shutting Down Daemons. Jan 20 23:52:03.693000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.693000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:03.700949 ignition[1173]: INFO : Ignition 2.24.0 Jan 20 23:52:03.700949 ignition[1173]: INFO : Stage: umount Jan 20 23:52:03.702750 ignition[1173]: INFO : no configs at "/usr/lib/ignition/base.d" Jan 20 23:52:03.702750 ignition[1173]: INFO : no config dir at "/usr/lib/ignition/base.platform.d/openstack" Jan 20 23:52:03.702750 ignition[1173]: INFO : umount: umount passed Jan 20 23:52:03.702750 ignition[1173]: INFO : Ignition finished successfully Jan 20 23:52:03.705000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-mount comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.703498 systemd[1]: sysroot-boot.mount: Deactivated successfully. Jan 20 23:52:03.709000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-disks comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.704108 systemd[1]: ignition-mount.service: Deactivated successfully. Jan 20 23:52:03.711000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-kargs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.704239 systemd[1]: Stopped ignition-mount.service - Ignition (mount). Jan 20 23:52:03.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.707631 systemd[1]: ignition-disks.service: Deactivated successfully. Jan 20 23:52:03.707696 systemd[1]: Stopped ignition-disks.service - Ignition (disks). Jan 20 23:52:03.716000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-fetch-offline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.709576 systemd[1]: ignition-kargs.service: Deactivated successfully. Jan 20 23:52:03.709625 systemd[1]: Stopped ignition-kargs.service - Ignition (kargs). Jan 20 23:52:03.712024 systemd[1]: ignition-fetch.service: Deactivated successfully. Jan 20 23:52:03.712082 systemd[1]: Stopped ignition-fetch.service - Ignition (fetch). Jan 20 23:52:03.713521 systemd[1]: Stopped target network.target - Network. Jan 20 23:52:03.715642 systemd[1]: ignition-fetch-offline.service: Deactivated successfully. Jan 20 23:52:03.715722 systemd[1]: Stopped ignition-fetch-offline.service - Ignition (fetch-offline). Jan 20 23:52:03.717371 systemd[1]: Stopped target paths.target - Path Units. Jan 20 23:52:03.719208 systemd[1]: systemd-ask-password-console.path: Deactivated successfully. Jan 20 23:52:03.723529 systemd[1]: Stopped systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:52:03.724588 systemd[1]: Stopped target slices.target - Slice Units. Jan 20 23:52:03.735000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.725936 systemd[1]: Stopped target sockets.target - Socket Units. Jan 20 23:52:03.737000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=ignition-setup-pre comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:03.728412 systemd[1]: iscsid.socket: Deactivated successfully. Jan 20 23:52:03.728485 systemd[1]: Closed iscsid.socket - Open-iSCSI iscsid Socket. Jan 20 23:52:03.730297 systemd[1]: iscsiuio.socket: Deactivated successfully. Jan 20 23:52:03.730331 systemd[1]: Closed iscsiuio.socket - Open-iSCSI iscsiuio Socket. Jan 20 23:52:03.732133 systemd[1]: systemd-journald-audit.socket: Deactivated successfully. Jan 20 23:52:03.732160 systemd[1]: Closed systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:52:03.733889 systemd[1]: ignition-setup.service: Deactivated successfully. Jan 20 23:52:03.733954 systemd[1]: Stopped ignition-setup.service - Ignition (setup). Jan 20 23:52:03.736102 systemd[1]: ignition-setup-pre.service: Deactivated successfully. Jan 20 23:52:03.736155 systemd[1]: Stopped ignition-setup-pre.service - Ignition env setup. Jan 20 23:52:03.737913 systemd[1]: Stopping systemd-networkd.service - Network Configuration... Jan 20 23:52:03.753000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.739809 systemd[1]: Stopping systemd-resolved.service - Network Name Resolution... Jan 20 23:52:03.755000 audit: BPF prog-id=6 op=UNLOAD Jan 20 23:52:03.750271 systemd[1]: systemd-resolved.service: Deactivated successfully. Jan 20 23:52:03.750392 systemd[1]: Stopped systemd-resolved.service - Network Name Resolution. Jan 20 23:52:03.761221 systemd[1]: systemd-networkd.service: Deactivated successfully. Jan 20 23:52:03.761383 systemd[1]: Stopped systemd-networkd.service - Network Configuration. Jan 20 23:52:03.764000 audit: BPF prog-id=9 op=UNLOAD Jan 20 23:52:03.764000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.766921 systemd[1]: Stopped target network-pre.target - Preparation for Network. Jan 20 23:52:03.768518 systemd[1]: systemd-networkd.socket: Deactivated successfully. Jan 20 23:52:03.768570 systemd[1]: Closed systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:52:03.771567 systemd[1]: Stopping network-cleanup.service - Network Cleanup... Jan 20 23:52:03.773491 systemd[1]: parse-ip-for-networkd.service: Deactivated successfully. Jan 20 23:52:03.774000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=parse-ip-for-networkd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.773561 systemd[1]: Stopped parse-ip-for-networkd.service - Write systemd-networkd units from cmdline. Jan 20 23:52:03.776000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.775196 systemd[1]: systemd-sysctl.service: Deactivated successfully. Jan 20 23:52:03.778000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.775241 systemd[1]: Stopped systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:52:03.776824 systemd[1]: systemd-modules-load.service: Deactivated successfully. 
Jan 20 23:52:03.776871 systemd[1]: Stopped systemd-modules-load.service - Load Kernel Modules. Jan 20 23:52:03.778820 systemd[1]: Stopping systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:52:03.780709 systemd[1]: sysroot-boot.service: Deactivated successfully. Jan 20 23:52:03.786608 systemd[1]: Stopped sysroot-boot.service - /sysroot/boot. Jan 20 23:52:03.786000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=sysroot-boot comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.788048 systemd[1]: initrd-setup-root.service: Deactivated successfully. Jan 20 23:52:03.789000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-setup-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.788149 systemd[1]: Stopped initrd-setup-root.service - Root filesystem setup. Jan 20 23:52:03.793275 systemd[1]: systemd-udevd.service: Deactivated successfully. Jan 20 23:52:03.793478 systemd[1]: Stopped systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:52:03.796000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.798002 systemd[1]: systemd-udevd-control.socket: Deactivated successfully. Jan 20 23:52:03.798075 systemd[1]: Closed systemd-udevd-control.socket - udev Control Socket. Jan 20 23:52:03.799951 systemd[1]: systemd-udevd-kernel.socket: Deactivated successfully. Jan 20 23:52:03.799987 systemd[1]: Closed systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:52:03.802000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-pre-udev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.801564 systemd[1]: dracut-pre-udev.service: Deactivated successfully. Jan 20 23:52:03.801624 systemd[1]: Stopped dracut-pre-udev.service - dracut pre-udev hook. Jan 20 23:52:03.805000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.803999 systemd[1]: dracut-cmdline.service: Deactivated successfully. Jan 20 23:52:03.804058 systemd[1]: Stopped dracut-cmdline.service - dracut cmdline hook. Jan 20 23:52:03.807000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=dracut-cmdline-ask comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.806365 systemd[1]: dracut-cmdline-ask.service: Deactivated successfully. Jan 20 23:52:03.806427 systemd[1]: Stopped dracut-cmdline-ask.service - dracut ask for additional cmdline parameters. Jan 20 23:52:03.816168 systemd[1]: Starting initrd-udevadm-cleanup-db.service - Cleanup udev Database... Jan 20 23:52:03.817210 systemd[1]: systemd-network-generator.service: Deactivated successfully. Jan 20 23:52:03.818000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.817291 systemd[1]: Stopped systemd-network-generator.service - Generate network units from Kernel command line. 
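
[editor's note] The initrd teardown above and below is dominated by audit SERVICE_START/SERVICE_STOP records. A small sketch of extracting the record type, unit, and result from one such line; the sample is copied from the log, and the regex is only an approximation of the audit format, not libaudit.

import re

SAMPLE = (
    "audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 "
    "subj=kernel msg='unit=systemd-sysctl comm=\"systemd\" "
    "exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success'"
)

RECORD_RE = re.compile(r"audit\[\d+\]: (?P<type>\w+) .*msg='(?P<msg>[^']*)'")

def parse_service_record(line: str) -> dict:
    match = RECORD_RE.search(line)
    if not match:
        raise ValueError("not an audit service record")
    # The msg payload is a run of key=value tokens; keep the interesting ones.
    fields = dict(
        pair.split("=", 1) for pair in match.group("msg").split() if "=" in pair
    )
    return {
        "type": match.group("type"),
        "unit": fields.get("unit"),
        "res": fields.get("res"),
    }

if __name__ == "__main__":
    print(parse_service_record(SAMPLE))
    # -> {'type': 'SERVICE_STOP', 'unit': 'systemd-sysctl', 'res': 'success'}
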
Jan 20 23:52:03.820000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.819512 systemd[1]: systemd-tmpfiles-setup-dev.service: Deactivated successfully. Jan 20 23:52:03.822000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=systemd-vconsole-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.819575 systemd[1]: Stopped systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:52:03.821434 systemd[1]: systemd-vconsole-setup.service: Deactivated successfully. Jan 20 23:52:03.825000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=network-cleanup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.821502 systemd[1]: Stopped systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:52:03.824091 systemd[1]: network-cleanup.service: Deactivated successfully. Jan 20 23:52:03.828000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.828000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=kernel msg='unit=initrd-udevadm-cleanup-db comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:03.825509 systemd[1]: Stopped network-cleanup.service - Network Cleanup. Jan 20 23:52:03.827195 systemd[1]: initrd-udevadm-cleanup-db.service: Deactivated successfully. Jan 20 23:52:03.827293 systemd[1]: Finished initrd-udevadm-cleanup-db.service - Cleanup udev Database. Jan 20 23:52:03.830001 systemd[1]: Reached target initrd-switch-root.target - Switch Root. Jan 20 23:52:03.832064 systemd[1]: Starting initrd-switch-root.service - Switch Root... Jan 20 23:52:03.862033 systemd[1]: Switching root. Jan 20 23:52:03.901590 systemd-journald[418]: Journal stopped Jan 20 23:52:04.953317 systemd-journald[418]: Received SIGTERM from PID 1 (systemd). Jan 20 23:52:04.953413 kernel: SELinux: policy capability network_peer_controls=1 Jan 20 23:52:04.953441 kernel: SELinux: policy capability open_perms=1 Jan 20 23:52:04.953468 kernel: SELinux: policy capability extended_socket_class=1 Jan 20 23:52:04.953496 kernel: SELinux: policy capability always_check_network=0 Jan 20 23:52:04.953517 kernel: SELinux: policy capability cgroup_seclabel=1 Jan 20 23:52:04.953528 kernel: SELinux: policy capability nnp_nosuid_transition=1 Jan 20 23:52:04.953542 kernel: SELinux: policy capability genfs_seclabel_symlinks=0 Jan 20 23:52:04.953553 kernel: SELinux: policy capability ioctl_skip_cloexec=0 Jan 20 23:52:04.953564 kernel: SELinux: policy capability userspace_initial_context=0 Jan 20 23:52:04.953575 systemd[1]: Successfully loaded SELinux policy in 70.394ms. Jan 20 23:52:04.953594 systemd[1]: Relabeled /dev/, /dev/shm/, /run/ in 6.239ms. 
Jan 20 23:52:04.953607 systemd[1]: systemd 257.9 running in system mode (+PAM +AUDIT +SELINUX -APPARMOR +IMA +IPE +SMACK +SECCOMP -GCRYPT -GNUTLS +OPENSSL -ACL +BLKID +CURL +ELFUTILS -FIDO2 +IDN2 -IDN +IPTC +KMOD +LIBCRYPTSETUP +LIBCRYPTSETUP_PLUGINS +LIBFDISK +PCRE2 -PWQUALITY -P11KIT -QRENCODE +TPM2 +BZIP2 +LZ4 +XZ +ZLIB +ZSTD -BPF_FRAMEWORK -BTF -XKBCOMMON +UTMP -SYSVINIT +LIBARCHIVE) Jan 20 23:52:04.953623 systemd[1]: Detected virtualization kvm. Jan 20 23:52:04.953635 systemd[1]: Detected architecture arm64. Jan 20 23:52:04.953646 systemd[1]: Detected first boot. Jan 20 23:52:04.953658 systemd[1]: Hostname set to . Jan 20 23:52:04.953669 systemd[1]: Initializing machine ID from SMBIOS/DMI UUID. Jan 20 23:52:04.953681 zram_generator::config[1218]: No configuration found. Jan 20 23:52:04.953700 kernel: NET: Registered PF_VSOCK protocol family Jan 20 23:52:04.953715 systemd[1]: Populated /etc with preset unit settings. Jan 20 23:52:04.953727 systemd[1]: initrd-switch-root.service: Deactivated successfully. Jan 20 23:52:04.953739 systemd[1]: Stopped initrd-switch-root.service - Switch Root. Jan 20 23:52:04.954226 systemd[1]: systemd-journald.service: Scheduled restart job, restart counter is at 1. Jan 20 23:52:04.954268 systemd[1]: Created slice system-addon\x2dconfig.slice - Slice /system/addon-config. Jan 20 23:52:04.954286 systemd[1]: Created slice system-addon\x2drun.slice - Slice /system/addon-run. Jan 20 23:52:04.954300 systemd[1]: Created slice system-getty.slice - Slice /system/getty. Jan 20 23:52:04.954312 systemd[1]: Created slice system-modprobe.slice - Slice /system/modprobe. Jan 20 23:52:04.954324 systemd[1]: Created slice system-serial\x2dgetty.slice - Slice /system/serial-getty. Jan 20 23:52:04.954336 systemd[1]: Created slice system-system\x2dcloudinit.slice - Slice /system/system-cloudinit. Jan 20 23:52:04.954347 systemd[1]: Created slice system-systemd\x2dfsck.slice - Slice /system/systemd-fsck. Jan 20 23:52:04.954358 systemd[1]: Created slice user.slice - User and Session Slice. Jan 20 23:52:04.954372 systemd[1]: Started clevis-luks-askpass.path - Forward Password Requests to Clevis Directory Watch. Jan 20 23:52:04.954389 systemd[1]: Started systemd-ask-password-console.path - Dispatch Password Requests to Console Directory Watch. Jan 20 23:52:04.954401 systemd[1]: Started systemd-ask-password-wall.path - Forward Password Requests to Wall Directory Watch. Jan 20 23:52:04.954413 systemd[1]: Set up automount boot.automount - Boot partition Automount Point. Jan 20 23:52:04.954425 systemd[1]: Set up automount proc-sys-fs-binfmt_misc.automount - Arbitrary Executable File Formats File System Automount Point. Jan 20 23:52:04.954436 systemd[1]: Expecting device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM... Jan 20 23:52:04.954447 systemd[1]: Expecting device dev-ttyAMA0.device - /dev/ttyAMA0... Jan 20 23:52:04.954479 systemd[1]: Reached target cryptsetup-pre.target - Local Encrypted Volumes (Pre). Jan 20 23:52:04.954510 systemd[1]: Reached target cryptsetup.target - Local Encrypted Volumes. Jan 20 23:52:04.954526 systemd[1]: Stopped target initrd-switch-root.target - Switch Root. Jan 20 23:52:04.954539 systemd[1]: Stopped target initrd-fs.target - Initrd File Systems. Jan 20 23:52:04.954550 systemd[1]: Stopped target initrd-root-fs.target - Initrd Root File System. Jan 20 23:52:04.954564 systemd[1]: Reached target integritysetup.target - Local Integrity Protected Volumes. 
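
[editor's note] The "systemd 257.9 running in system mode (+PAM +AUDIT ...)" line above lists compile-time features as +NAME/-NAME tokens. A minimal sketch of splitting such a string into enabled and disabled sets; the string below is an abbreviated copy of the one in the log.

FEATURES = "+PAM +AUDIT +SELINUX -APPARMOR +IMA +SECCOMP -GCRYPT +OPENSSL -ACL +TPM2"

def split_features(feature_string: str) -> tuple[set, set]:
    enabled, disabled = set(), set()
    for token in feature_string.split():
        # A leading '+' means compiled in, '-' means compiled out.
        (enabled if token.startswith("+") else disabled).add(token[1:])
    return enabled, disabled

if __name__ == "__main__":
    on, off = split_features(FEATURES)
    print("enabled:", sorted(on))
    print("disabled:", sorted(off))
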
Jan 20 23:52:04.954579 systemd[1]: Reached target remote-cryptsetup.target - Remote Encrypted Volumes. Jan 20 23:52:04.954591 systemd[1]: Reached target remote-fs.target - Remote File Systems. Jan 20 23:52:04.954602 systemd[1]: Reached target remote-veritysetup.target - Remote Verity Protected Volumes. Jan 20 23:52:04.954613 systemd[1]: Reached target slices.target - Slice Units. Jan 20 23:52:04.954625 systemd[1]: Reached target swap.target - Swaps. Jan 20 23:52:04.954641 systemd[1]: Reached target veritysetup.target - Local Verity Protected Volumes. Jan 20 23:52:04.954653 systemd[1]: Listening on systemd-coredump.socket - Process Core Dump Socket. Jan 20 23:52:04.954667 systemd[1]: Listening on systemd-creds.socket - Credential Encryption/Decryption. Jan 20 23:52:04.954678 systemd[1]: Listening on systemd-journald-audit.socket - Journal Audit Socket. Jan 20 23:52:04.954690 systemd[1]: Listening on systemd-mountfsd.socket - DDI File System Mounter Socket. Jan 20 23:52:04.954701 systemd[1]: Listening on systemd-networkd.socket - Network Service Netlink Socket. Jan 20 23:52:04.954713 systemd[1]: Listening on systemd-nsresourced.socket - Namespace Resource Manager Socket. Jan 20 23:52:04.954725 systemd[1]: Listening on systemd-oomd.socket - Userspace Out-Of-Memory (OOM) Killer Socket. Jan 20 23:52:04.954736 systemd[1]: Listening on systemd-udevd-control.socket - udev Control Socket. Jan 20 23:52:04.954749 systemd[1]: Listening on systemd-udevd-kernel.socket - udev Kernel Socket. Jan 20 23:52:04.954761 systemd[1]: Listening on systemd-userdbd.socket - User Database Manager Socket. Jan 20 23:52:04.954773 systemd[1]: Mounting dev-hugepages.mount - Huge Pages File System... Jan 20 23:52:04.954785 systemd[1]: Mounting dev-mqueue.mount - POSIX Message Queue File System... Jan 20 23:52:04.954796 systemd[1]: Mounting media.mount - External Media Directory... Jan 20 23:52:04.954807 systemd[1]: Mounting sys-kernel-debug.mount - Kernel Debug File System... Jan 20 23:52:04.954820 systemd[1]: Mounting sys-kernel-tracing.mount - Kernel Trace File System... Jan 20 23:52:04.954834 systemd[1]: Mounting tmp.mount - Temporary Directory /tmp... Jan 20 23:52:04.954846 systemd[1]: var-lib-machines.mount - Virtual Machine and Container Storage (Compatibility) was skipped because of an unmet condition check (ConditionPathExists=/var/lib/machines.raw). Jan 20 23:52:04.954858 systemd[1]: Reached target machines.target - Containers. Jan 20 23:52:04.954869 systemd[1]: Starting flatcar-tmpfiles.service - Create missing system files... Jan 20 23:52:04.954881 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:52:04.954892 systemd[1]: Starting kmod-static-nodes.service - Create List of Static Device Nodes... Jan 20 23:52:04.954905 systemd[1]: Starting modprobe@configfs.service - Load Kernel Module configfs... Jan 20 23:52:04.954920 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:52:04.954932 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:52:04.954943 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:52:04.954956 systemd[1]: Starting modprobe@fuse.service - Load Kernel Module fuse... Jan 20 23:52:04.954968 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... 
Jan 20 23:52:04.954981 systemd[1]: setup-nsswitch.service - Create /etc/nsswitch.conf was skipped because of an unmet condition check (ConditionPathExists=!/etc/nsswitch.conf). Jan 20 23:52:04.954992 systemd[1]: systemd-fsck-root.service: Deactivated successfully. Jan 20 23:52:04.955004 systemd[1]: Stopped systemd-fsck-root.service - File System Check on Root Device. Jan 20 23:52:04.955015 systemd[1]: systemd-fsck-usr.service: Deactivated successfully. Jan 20 23:52:04.955026 systemd[1]: Stopped systemd-fsck-usr.service. Jan 20 23:52:04.955040 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:52:04.955052 systemd[1]: Starting systemd-journald.service - Journal Service... Jan 20 23:52:04.955064 kernel: fuse: init (API version 7.41) Jan 20 23:52:04.955076 systemd[1]: Starting systemd-modules-load.service - Load Kernel Modules... Jan 20 23:52:04.955091 systemd[1]: Starting systemd-network-generator.service - Generate network units from Kernel command line... Jan 20 23:52:04.955104 systemd[1]: Starting systemd-remount-fs.service - Remount Root and Kernel File Systems... Jan 20 23:52:04.955116 systemd[1]: Starting systemd-udev-load-credentials.service - Load udev Rules from Credentials... Jan 20 23:52:04.955129 kernel: ACPI: bus type drm_connector registered Jan 20 23:52:04.955140 systemd[1]: Starting systemd-udev-trigger.service - Coldplug All udev Devices... Jan 20 23:52:04.955152 systemd[1]: Mounted dev-hugepages.mount - Huge Pages File System. Jan 20 23:52:04.955164 systemd[1]: Mounted dev-mqueue.mount - POSIX Message Queue File System. Jan 20 23:52:04.955176 systemd[1]: Mounted media.mount - External Media Directory. Jan 20 23:52:04.955187 systemd[1]: Mounted sys-kernel-debug.mount - Kernel Debug File System. Jan 20 23:52:04.955199 systemd[1]: Mounted sys-kernel-tracing.mount - Kernel Trace File System. Jan 20 23:52:04.955216 systemd[1]: Mounted tmp.mount - Temporary Directory /tmp. Jan 20 23:52:04.955228 systemd[1]: Finished kmod-static-nodes.service - Create List of Static Device Nodes. Jan 20 23:52:04.955272 systemd-journald[1287]: Collecting audit messages is enabled. Jan 20 23:52:04.955297 systemd[1]: modprobe@configfs.service: Deactivated successfully. Jan 20 23:52:04.955310 systemd-journald[1287]: Journal started Jan 20 23:52:04.955333 systemd-journald[1287]: Runtime Journal (/run/log/journal/16425fea29554372b76ac9a5ba5ba762) is 8M, max 319.5M, 311.5M free. Jan 20 23:52:04.800000 audit[1]: EVENT_LISTENER pid=1 uid=0 auid=4294967295 tty=(none) ses=4294967295 subj=system_u:system_r:kernel_t:s0 comm="systemd" exe="/usr/lib/systemd/systemd" nl-mcgrp=1 op=connect res=1 Jan 20 23:52:04.898000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-root comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.900000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck-usr comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:04.903000 audit: BPF prog-id=14 op=UNLOAD Jan 20 23:52:04.903000 audit: BPF prog-id=13 op=UNLOAD Jan 20 23:52:04.903000 audit: BPF prog-id=15 op=LOAD Jan 20 23:52:04.904000 audit: BPF prog-id=16 op=LOAD Jan 20 23:52:04.904000 audit: BPF prog-id=17 op=LOAD Jan 20 23:52:04.950000 audit: CONFIG_CHANGE op=set audit_enabled=1 old=1 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 res=1 Jan 20 23:52:04.950000 audit[1287]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=60 a0=3 a1=ffffd313cf20 a2=4000 a3=0 items=0 ppid=1 pid=1287 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="systemd-journal" exe="/usr/lib/systemd/systemd-journald" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:04.950000 audit: PROCTITLE proctitle="/usr/lib/systemd/systemd-journald" Jan 20 23:52:04.953000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kmod-static-nodes comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.707942 systemd[1]: Queued start job for default target multi-user.target. Jan 20 23:52:04.731579 systemd[1]: Unnecessary job was removed for dev-vda6.device - /dev/vda6. Jan 20 23:52:04.732047 systemd[1]: systemd-journald.service: Deactivated successfully. Jan 20 23:52:04.957491 systemd[1]: Finished modprobe@configfs.service - Load Kernel Module configfs. Jan 20 23:52:04.958000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.958000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@configfs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.963493 systemd[1]: Started systemd-journald.service - Journal Service. Jan 20 23:52:04.964445 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:52:04.964677 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:52:04.963000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journald comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.965000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.966169 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:52:04.965000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.966384 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:52:04.967000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:04.967000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.967852 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:52:04.968051 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:52:04.968000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.968000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.969763 systemd[1]: modprobe@fuse.service: Deactivated successfully. Jan 20 23:52:04.969941 systemd[1]: Finished modprobe@fuse.service - Load Kernel Module fuse. Jan 20 23:52:04.970000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.970000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@fuse comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.971398 systemd[1]: Finished flatcar-tmpfiles.service - Create missing system files. Jan 20 23:52:04.972000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=flatcar-tmpfiles comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.972753 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:52:04.972935 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. Jan 20 23:52:04.975000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.975000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.975933 systemd[1]: Finished systemd-modules-load.service - Load Kernel Modules. Jan 20 23:52:04.976000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-modules-load comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.977386 systemd[1]: Finished systemd-network-generator.service - Generate network units from Kernel command line. Jan 20 23:52:04.978000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-network-generator comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.979709 systemd[1]: Finished systemd-remount-fs.service - Remount Root and Kernel File Systems. 
Jan 20 23:52:04.980000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-remount-fs comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.981472 systemd[1]: Finished systemd-udev-load-credentials.service - Load udev Rules from Credentials. Jan 20 23:52:04.982000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-load-credentials comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:04.994502 systemd[1]: Reached target network-pre.target - Preparation for Network. Jan 20 23:52:04.996603 systemd[1]: Listening on systemd-importd.socket - Disk Image Download Service Socket. Jan 20 23:52:04.998750 systemd[1]: Mounting sys-fs-fuse-connections.mount - FUSE Control File System... Jan 20 23:52:05.000806 systemd[1]: Mounting sys-kernel-config.mount - Kernel Configuration File System... Jan 20 23:52:05.001820 systemd[1]: remount-root.service - Remount Root File System was skipped because of an unmet condition check (ConditionPathIsReadWrite=!/). Jan 20 23:52:05.001851 systemd[1]: Reached target local-fs.target - Local File Systems. Jan 20 23:52:05.003597 systemd[1]: Listening on systemd-sysext.socket - System Extension Image Management. Jan 20 23:52:05.004873 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:52:05.004987 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:52:05.011638 systemd[1]: Starting systemd-hwdb-update.service - Rebuild Hardware Database... Jan 20 23:52:05.013877 systemd[1]: Starting systemd-journal-flush.service - Flush Journal to Persistent Storage... Jan 20 23:52:05.015018 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:52:05.016302 systemd[1]: Starting systemd-random-seed.service - Load/Save OS Random Seed... Jan 20 23:52:05.017476 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:52:05.021651 systemd[1]: Starting systemd-sysctl.service - Apply Kernel Variables... Jan 20 23:52:05.027743 systemd[1]: Starting systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/... Jan 20 23:52:05.030751 systemd[1]: Starting systemd-sysusers.service - Create System Users... Jan 20 23:52:05.033337 systemd[1]: Mounted sys-fs-fuse-connections.mount - FUSE Control File System. Jan 20 23:52:05.034743 systemd[1]: Mounted sys-kernel-config.mount - Kernel Configuration File System. Jan 20 23:52:05.037975 systemd-journald[1287]: Time spent on flushing to /var/log/journal/16425fea29554372b76ac9a5ba5ba762 is 24.419ms for 1815 entries. Jan 20 23:52:05.037975 systemd-journald[1287]: System Journal (/var/log/journal/16425fea29554372b76ac9a5ba5ba762) is 8M, max 588.1M, 580.1M free. Jan 20 23:52:05.077885 systemd-journald[1287]: Received client request to flush runtime journal. Jan 20 23:52:05.077988 kernel: loop1: detected capacity change from 0 to 1648 Jan 20 23:52:05.050000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-random-seed comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? 
addr=? terminal=? res=success' Jan 20 23:52:05.056000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udev-trigger comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.059000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysctl comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.048974 systemd[1]: Finished systemd-random-seed.service - Load/Save OS Random Seed. Jan 20 23:52:05.050953 systemd[1]: Reached target first-boot-complete.target - First Boot Complete. Jan 20 23:52:05.053612 systemd[1]: Starting systemd-machine-id-commit.service - Save Transient machine-id to Disk... Jan 20 23:52:05.055038 systemd[1]: Finished systemd-udev-trigger.service - Coldplug All udev Devices. Jan 20 23:52:05.058602 systemd[1]: Finished systemd-sysctl.service - Apply Kernel Variables. Jan 20 23:52:05.081501 systemd[1]: Finished systemd-journal-flush.service - Flush Journal to Persistent Storage. Jan 20 23:52:05.082000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-flush comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.093534 systemd[1]: Finished systemd-machine-id-commit.service - Save Transient machine-id to Disk. Jan 20 23:52:05.094000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-machine-id-commit comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.097492 kernel: loop2: detected capacity change from 0 to 45344 Jan 20 23:52:05.100710 systemd[1]: Finished systemd-sysusers.service - Create System Users. Jan 20 23:52:05.101000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysusers comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.102000 audit: BPF prog-id=18 op=LOAD Jan 20 23:52:05.102000 audit: BPF prog-id=19 op=LOAD Jan 20 23:52:05.102000 audit: BPF prog-id=20 op=LOAD Jan 20 23:52:05.104182 systemd[1]: Starting systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer... Jan 20 23:52:05.105000 audit: BPF prog-id=21 op=LOAD Jan 20 23:52:05.107012 systemd[1]: Starting systemd-resolved.service - Network Name Resolution... Jan 20 23:52:05.111623 systemd[1]: Starting systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev... Jan 20 23:52:05.113000 audit: BPF prog-id=22 op=LOAD Jan 20 23:52:05.114000 audit: BPF prog-id=23 op=LOAD Jan 20 23:52:05.114000 audit: BPF prog-id=24 op=LOAD Jan 20 23:52:05.115411 systemd[1]: Starting systemd-nsresourced.service - Namespace Resource Manager... Jan 20 23:52:05.138000 audit: BPF prog-id=25 op=LOAD Jan 20 23:52:05.138000 audit: BPF prog-id=26 op=LOAD Jan 20 23:52:05.138000 audit: BPF prog-id=27 op=LOAD Jan 20 23:52:05.141648 systemd[1]: Starting systemd-userdbd.service - User Database Manager... Jan 20 23:52:05.151496 kernel: loop3: detected capacity change from 0 to 100192 Jan 20 23:52:05.164359 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. Jan 20 23:52:05.164384 systemd-tmpfiles[1358]: ACLs are not supported, ignoring. 
Jan 20 23:52:05.171534 systemd[1]: Finished systemd-tmpfiles-setup-dev.service - Create Static Device Nodes in /dev. Jan 20 23:52:05.172532 systemd-nsresourced[1359]: Not setting up BPF subsystem, as functionality has been disabled at compile time. Jan 20 23:52:05.172000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup-dev comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.174579 systemd[1]: Started systemd-nsresourced.service - Namespace Resource Manager. Jan 20 23:52:05.175000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-nsresourced comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.190652 systemd[1]: Started systemd-userdbd.service - User Database Manager. Jan 20 23:52:05.191000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-userdbd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.212483 kernel: loop4: detected capacity change from 0 to 211168 Jan 20 23:52:05.245691 systemd-oomd[1356]: No swap; memory pressure usage will be degraded Jan 20 23:52:05.246385 systemd[1]: Started systemd-oomd.service - Userspace Out-Of-Memory (OOM) Killer. Jan 20 23:52:05.247000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-oomd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.254416 systemd-resolved[1357]: Positive Trust Anchors: Jan 20 23:52:05.254771 systemd-resolved[1357]: . IN DS 20326 8 2 e06d44b80b8f1d39a95c0b0d7c65d08458e880409bbc683457104237c7f8ec8d Jan 20 23:52:05.254779 systemd-resolved[1357]: . IN DS 38696 8 2 683d2d0acb8c9b712a1948b27f741219298d0a450d612c483af444a4c0fb2b16 Jan 20 23:52:05.254810 systemd-resolved[1357]: Negative trust anchors: home.arpa 10.in-addr.arpa 16.172.in-addr.arpa 17.172.in-addr.arpa 18.172.in-addr.arpa 19.172.in-addr.arpa 20.172.in-addr.arpa 21.172.in-addr.arpa 22.172.in-addr.arpa 23.172.in-addr.arpa 24.172.in-addr.arpa 25.172.in-addr.arpa 26.172.in-addr.arpa 27.172.in-addr.arpa 28.172.in-addr.arpa 29.172.in-addr.arpa 30.172.in-addr.arpa 31.172.in-addr.arpa 170.0.0.192.in-addr.arpa 171.0.0.192.in-addr.arpa 168.192.in-addr.arpa d.f.ip6.arpa ipv4only.arpa resolver.arpa corp home internal intranet lan local private test Jan 20 23:52:05.259510 kernel: loop5: detected capacity change from 0 to 1648 Jan 20 23:52:05.264529 kernel: loop6: detected capacity change from 0 to 45344 Jan 20 23:52:05.264612 systemd-resolved[1357]: Using system hostname 'ci-4547-0-0-n-e5b472a427'. Jan 20 23:52:05.266107 systemd[1]: Started systemd-resolved.service - Network Name Resolution. Jan 20 23:52:05.266000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-resolved comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.267549 systemd[1]: Reached target nss-lookup.target - Host and Network Name Lookups. 
Jan 20 23:52:05.282491 kernel: loop7: detected capacity change from 0 to 100192 Jan 20 23:52:05.300480 kernel: loop1: detected capacity change from 0 to 211168 Jan 20 23:52:05.316613 (sd-merge)[1381]: Using extensions 'containerd-flatcar.raw', 'docker-flatcar.raw', 'kubernetes.raw', 'oem-stackit.raw'. Jan 20 23:52:05.319768 (sd-merge)[1381]: Merged extensions into '/usr'. Jan 20 23:52:05.324118 systemd[1]: Reload requested from client PID 1338 ('systemd-sysext') (unit systemd-sysext.service)... Jan 20 23:52:05.324134 systemd[1]: Reloading... Jan 20 23:52:05.379485 zram_generator::config[1411]: No configuration found. Jan 20 23:52:05.535852 systemd[1]: etc-machine\x2did.mount: Deactivated successfully. Jan 20 23:52:05.536077 systemd[1]: Reloading finished in 211 ms. Jan 20 23:52:05.572335 systemd[1]: Finished systemd-hwdb-update.service - Rebuild Hardware Database. Jan 20 23:52:05.574000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-hwdb-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.575485 systemd[1]: Finished systemd-sysext.service - Merge System Extension Images into /usr/ and /opt/. Jan 20 23:52:05.576000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.589821 systemd[1]: Starting ensure-sysext.service... Jan 20 23:52:05.591710 systemd[1]: Starting systemd-tmpfiles-setup.service - Create System Files and Directories... Jan 20 23:52:05.592000 audit: BPF prog-id=8 op=UNLOAD Jan 20 23:52:05.592000 audit: BPF prog-id=7 op=UNLOAD Jan 20 23:52:05.593000 audit: BPF prog-id=28 op=LOAD Jan 20 23:52:05.593000 audit: BPF prog-id=29 op=LOAD Jan 20 23:52:05.594317 systemd[1]: Starting systemd-udevd.service - Rule-based Manager for Device Events and Files... Jan 20 23:52:05.596000 audit: BPF prog-id=30 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=25 op=UNLOAD Jan 20 23:52:05.596000 audit: BPF prog-id=31 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=32 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=26 op=UNLOAD Jan 20 23:52:05.596000 audit: BPF prog-id=27 op=UNLOAD Jan 20 23:52:05.596000 audit: BPF prog-id=33 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=22 op=UNLOAD Jan 20 23:52:05.596000 audit: BPF prog-id=34 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=35 op=LOAD Jan 20 23:52:05.596000 audit: BPF prog-id=23 op=UNLOAD Jan 20 23:52:05.596000 audit: BPF prog-id=24 op=UNLOAD Jan 20 23:52:05.597000 audit: BPF prog-id=36 op=LOAD Jan 20 23:52:05.597000 audit: BPF prog-id=21 op=UNLOAD Jan 20 23:52:05.597000 audit: BPF prog-id=37 op=LOAD Jan 20 23:52:05.597000 audit: BPF prog-id=18 op=UNLOAD Jan 20 23:52:05.598000 audit: BPF prog-id=38 op=LOAD Jan 20 23:52:05.598000 audit: BPF prog-id=39 op=LOAD Jan 20 23:52:05.598000 audit: BPF prog-id=19 op=UNLOAD Jan 20 23:52:05.598000 audit: BPF prog-id=20 op=UNLOAD Jan 20 23:52:05.599000 audit: BPF prog-id=40 op=LOAD Jan 20 23:52:05.599000 audit: BPF prog-id=15 op=UNLOAD Jan 20 23:52:05.599000 audit: BPF prog-id=41 op=LOAD Jan 20 23:52:05.599000 audit: BPF prog-id=42 op=LOAD Jan 20 23:52:05.599000 audit: BPF prog-id=16 op=UNLOAD Jan 20 23:52:05.599000 audit: BPF prog-id=17 op=UNLOAD Jan 20 23:52:05.604638 systemd[1]: Reload requested from client PID 1448 ('systemctl') (unit ensure-sysext.service)... 
Jan 20 23:52:05.604659 systemd[1]: Reloading... Jan 20 23:52:05.608344 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/nfs-utils.conf:6: Duplicate line for path "/var/lib/nfs/sm", ignoring. Jan 20 23:52:05.608408 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/nfs-utils.conf:7: Duplicate line for path "/var/lib/nfs/sm.bak", ignoring. Jan 20 23:52:05.608841 systemd-tmpfiles[1449]: /usr/lib/tmpfiles.d/provision.conf:20: Duplicate line for path "/root", ignoring. Jan 20 23:52:05.609803 systemd-tmpfiles[1449]: ACLs are not supported, ignoring. Jan 20 23:52:05.609867 systemd-tmpfiles[1449]: ACLs are not supported, ignoring. Jan 20 23:52:05.617112 systemd-tmpfiles[1449]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:52:05.617129 systemd-tmpfiles[1449]: Skipping /boot Jan 20 23:52:05.620929 systemd-udevd[1450]: Using default interface naming scheme 'v257'. Jan 20 23:52:05.626179 systemd-tmpfiles[1449]: Detected autofs mount point /boot during canonicalization of boot. Jan 20 23:52:05.626201 systemd-tmpfiles[1449]: Skipping /boot Jan 20 23:52:05.668491 zram_generator::config[1481]: No configuration found. Jan 20 23:52:05.794495 kernel: mousedev: PS/2 mouse device common for all mice Jan 20 23:52:05.838847 kernel: [drm] pci: virtio-gpu-pci detected at 0000:06:00.0 Jan 20 23:52:05.838960 kernel: [drm] features: -virgl +edid -resource_blob -host_visible Jan 20 23:52:05.838978 kernel: [drm] features: -context_init Jan 20 23:52:05.844921 kernel: [drm] number of scanouts: 1 Jan 20 23:52:05.845023 kernel: [drm] number of cap sets: 0 Jan 20 23:52:05.865831 systemd[1]: Condition check resulted in dev-ttyAMA0.device - /dev/ttyAMA0 being skipped. Jan 20 23:52:05.866157 systemd[1]: Found device dev-disk-by\x2dlabel-OEM.device - /dev/disk/by-label/OEM. Jan 20 23:52:05.867490 kernel: [drm] Initialized virtio_gpu 0.1.0 for 0000:06:00.0 on minor 0 Jan 20 23:52:05.868018 systemd[1]: Reloading finished in 263 ms. Jan 20 23:52:05.882715 systemd[1]: Started systemd-udevd.service - Rule-based Manager for Device Events and Files. Jan 20 23:52:05.884000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-udevd comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.890500 kernel: Console: switching to colour frame buffer device 160x50 Jan 20 23:52:05.892487 kernel: virtio-pci 0000:06:00.0: [drm] fb0: virtio_gpudrmfb frame buffer device Jan 20 23:52:05.893590 systemd[1]: Finished systemd-tmpfiles-setup.service - Create System Files and Directories. Jan 20 23:52:05.906000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-tmpfiles-setup comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:05.908000 audit: BPF prog-id=43 op=LOAD Jan 20 23:52:05.908000 audit: BPF prog-id=33 op=UNLOAD Jan 20 23:52:05.909000 audit: BPF prog-id=44 op=LOAD Jan 20 23:52:05.909000 audit: BPF prog-id=45 op=LOAD Jan 20 23:52:05.909000 audit: BPF prog-id=34 op=UNLOAD Jan 20 23:52:05.909000 audit: BPF prog-id=35 op=UNLOAD Jan 20 23:52:05.911000 audit: BPF prog-id=46 op=LOAD Jan 20 23:52:05.911000 audit: BPF prog-id=36 op=UNLOAD Jan 20 23:52:05.913000 audit: BPF prog-id=47 op=LOAD Jan 20 23:52:05.913000 audit: BPF prog-id=37 op=UNLOAD Jan 20 23:52:05.913000 audit: BPF prog-id=48 op=LOAD Jan 20 23:52:05.913000 audit: BPF prog-id=49 op=LOAD Jan 20 23:52:05.913000 audit: BPF prog-id=38 op=UNLOAD Jan 20 23:52:05.913000 audit: BPF prog-id=39 op=UNLOAD Jan 20 23:52:05.913000 audit: BPF prog-id=50 op=LOAD Jan 20 23:52:05.913000 audit: BPF prog-id=51 op=LOAD Jan 20 23:52:05.913000 audit: BPF prog-id=28 op=UNLOAD Jan 20 23:52:05.913000 audit: BPF prog-id=29 op=UNLOAD Jan 20 23:52:05.914000 audit: BPF prog-id=52 op=LOAD Jan 20 23:52:05.914000 audit: BPF prog-id=40 op=UNLOAD Jan 20 23:52:05.914000 audit: BPF prog-id=53 op=LOAD Jan 20 23:52:05.914000 audit: BPF prog-id=54 op=LOAD Jan 20 23:52:05.914000 audit: BPF prog-id=41 op=UNLOAD Jan 20 23:52:05.914000 audit: BPF prog-id=42 op=UNLOAD Jan 20 23:52:05.914000 audit: BPF prog-id=55 op=LOAD Jan 20 23:52:05.914000 audit: BPF prog-id=30 op=UNLOAD Jan 20 23:52:05.914000 audit: BPF prog-id=56 op=LOAD Jan 20 23:52:05.915000 audit: BPF prog-id=57 op=LOAD Jan 20 23:52:05.915000 audit: BPF prog-id=31 op=UNLOAD Jan 20 23:52:05.915000 audit: BPF prog-id=32 op=UNLOAD Jan 20 23:52:05.955492 systemd[1]: Finished ensure-sysext.service. Jan 20 23:52:05.955000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=ensure-sysext comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:05.962302 systemd[1]: Starting audit-rules.service - Load Audit Rules... Jan 20 23:52:05.964522 systemd[1]: Starting clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs... Jan 20 23:52:05.965708 systemd[1]: ignition-delete-config.service - Ignition (delete config) was skipped because no trigger condition checks were met. Jan 20 23:52:05.966840 systemd[1]: Starting modprobe@dm_mod.service - Load Kernel Module dm_mod... Jan 20 23:52:05.976590 systemd[1]: Starting modprobe@drm.service - Load Kernel Module drm... Jan 20 23:52:05.978635 systemd[1]: Starting modprobe@efi_pstore.service - Load Kernel Module efi_pstore... Jan 20 23:52:05.980889 systemd[1]: Starting modprobe@loop.service - Load Kernel Module loop... Jan 20 23:52:05.988629 systemd[1]: Starting modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm... Jan 20 23:52:05.989899 systemd[1]: systemd-binfmt.service - Set Up Additional Binary Formats was skipped because no trigger condition checks were met. Jan 20 23:52:05.990012 systemd[1]: systemd-confext.service - Merge System Configuration Images into /etc/ was skipped because no trigger condition checks were met. Jan 20 23:52:05.991513 systemd[1]: Starting ldconfig.service - Rebuild Dynamic Linker Cache... Jan 20 23:52:05.993731 systemd[1]: Starting systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM... 
Jan 20 23:52:05.995317 systemd[1]: systemd-hibernate-clear.service - Clear Stale Hibernate Storage Info was skipped because of an unmet condition check (ConditionPathExists=/sys/firmware/efi/efivars/HibernateLocation-8cf2644b-4b0b-428f-9387-6d876050dc67). Jan 20 23:52:05.996884 systemd[1]: Starting systemd-journal-catalog-update.service - Rebuild Journal Catalog... Jan 20 23:52:06.000422 kernel: pps_core: LinuxPPS API ver. 1 registered Jan 20 23:52:06.000526 kernel: pps_core: Software ver. 5.3.6 - Copyright 2005-2007 Rodolfo Giometti Jan 20 23:52:06.003848 systemd[1]: Starting systemd-networkd.service - Network Configuration... Jan 20 23:52:06.002000 audit: BPF prog-id=58 op=LOAD Jan 20 23:52:06.004495 kernel: PTP clock support registered Jan 20 23:52:06.005308 systemd[1]: Reached target time-set.target - System Time Set. Jan 20 23:52:06.008638 systemd[1]: Starting systemd-update-utmp.service - Record System Boot/Shutdown in UTMP... Jan 20 23:52:06.011083 systemd[1]: Starting systemd-vconsole-setup.service - Virtual Console Setup... Jan 20 23:52:06.017681 systemd[1]: modprobe@dm_mod.service: Deactivated successfully. Jan 20 23:52:06.025514 systemd[1]: Finished modprobe@dm_mod.service - Load Kernel Module dm_mod. Jan 20 23:52:06.026000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.026000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@dm_mod comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.027629 systemd[1]: modprobe@drm.service: Deactivated successfully. Jan 20 23:52:06.027844 systemd[1]: Finished modprobe@drm.service - Load Kernel Module drm. Jan 20 23:52:06.029000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.029000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@drm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.030000 audit[1587]: SYSTEM_BOOT pid=1587 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg=' comm="systemd-update-utmp" exe="/usr/lib/systemd/systemd-update-utmp" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.031111 systemd[1]: modprobe@efi_pstore.service: Deactivated successfully. Jan 20 23:52:06.031308 systemd[1]: Finished modprobe@efi_pstore.service - Load Kernel Module efi_pstore. Jan 20 23:52:06.032000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.032000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@efi_pstore comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.033127 systemd[1]: modprobe@loop.service: Deactivated successfully. Jan 20 23:52:06.034505 systemd[1]: Finished modprobe@loop.service - Load Kernel Module loop. 
Jan 20 23:52:06.034000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.034000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@loop comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.035710 systemd[1]: modprobe@ptp_kvm.service: Deactivated successfully. Jan 20 23:52:06.035941 systemd[1]: Finished modprobe@ptp_kvm.service - Load Kernel Module ptp_kvm. Jan 20 23:52:06.038000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.038000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=modprobe@ptp_kvm comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.040534 systemd[1]: Finished systemd-fsck@dev-disk-by\x2dlabel-OEM.service - File System Check on /dev/disk/by-label/OEM. Jan 20 23:52:06.041000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-fsck@dev-disk-by\x2dlabel-OEM comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.049646 systemd[1]: systemd-pstore.service - Platform Persistent Storage Archival was skipped because of an unmet condition check (ConditionDirectoryNotEmpty=/sys/fs/pstore). Jan 20 23:52:06.049804 systemd[1]: systemd-repart.service - Repartition Root Disk was skipped because no trigger condition checks were met. Jan 20 23:52:06.054773 systemd[1]: Finished systemd-journal-catalog-update.service - Rebuild Journal Catalog. Jan 20 23:52:06.055000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-journal-catalog-update comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.059502 systemd[1]: Finished systemd-update-utmp.service - Record System Boot/Shutdown in UTMP. Jan 20 23:52:06.060000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=systemd-update-utmp comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:06.066000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=add_rule key=(null) list=5 res=1 Jan 20 23:52:06.066000 audit[1615]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffe35e9770 a2=420 a3=0 items=0 ppid=1570 pid=1615 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:06.066000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:52:06.066828 augenrules[1615]: No rules Jan 20 23:52:06.067952 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 23:52:06.068269 systemd[1]: Finished audit-rules.service - Load Audit Rules. 
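In the auditctl record above, the PROCTITLE value is the invoking command line, hex-encoded with NUL bytes separating the arguments; decoded, the value logged here is /sbin/auditctl -R /etc/audit/audit.rules. A short Python sketch of that decoding, with the hex copied from the record above:

    # Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
    hex_proctitle = "2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573"
    argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(argv)  # ['/sbin/auditctl', '-R', '/etc/audit/audit.rules']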
Jan 20 23:52:06.112913 systemd-networkd[1586]: lo: Link UP Jan 20 23:52:06.112927 systemd-networkd[1586]: lo: Gained carrier Jan 20 23:52:06.114080 systemd-networkd[1586]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:52:06.114090 systemd-networkd[1586]: eth0: Configuring with /usr/lib/systemd/network/zz-default.network. Jan 20 23:52:06.114694 systemd[1]: Started systemd-networkd.service - Network Configuration. Jan 20 23:52:06.115216 systemd-networkd[1586]: eth0: Link UP Jan 20 23:52:06.115349 systemd-networkd[1586]: eth0: Gained carrier Jan 20 23:52:06.115362 systemd-networkd[1586]: eth0: Found matching .network file, based on potentially unpredictable interface name: /usr/lib/systemd/network/zz-default.network Jan 20 23:52:06.115880 systemd[1]: Reached target network.target - Network. Jan 20 23:52:06.119984 systemd[1]: Starting systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd... Jan 20 23:52:06.122549 systemd[1]: Starting systemd-networkd-wait-online.service - Wait for Network to be Configured... Jan 20 23:52:06.125521 systemd[1]: Finished clean-ca-certificates.service - Clean up broken links in /etc/ssl/certs. Jan 20 23:52:06.127375 systemd[1]: Finished systemd-vconsole-setup.service - Virtual Console Setup. Jan 20 23:52:06.133143 systemd[1]: update-ca-certificates.service - Update CA bundle at /etc/ssl/certs/ca-certificates.crt was skipped because of an unmet condition check (ConditionPathIsSymbolicLink=!/etc/ssl/certs/ca-certificates.crt). Jan 20 23:52:06.137539 systemd-networkd[1586]: eth0: DHCPv4 address 10.0.2.209/25, gateway 10.0.2.129 acquired from 10.0.2.129 Jan 20 23:52:06.148864 systemd[1]: Finished systemd-networkd-persistent-storage.service - Enable Persistent Storage in systemd-networkd. Jan 20 23:52:06.610666 ldconfig[1578]: /sbin/ldconfig: /usr/lib/ld.so.conf is not an ELF file - it has the wrong magic bytes at the start. Jan 20 23:52:06.615062 systemd[1]: Finished ldconfig.service - Rebuild Dynamic Linker Cache. Jan 20 23:52:06.618724 systemd[1]: Starting systemd-update-done.service - Update is Completed... Jan 20 23:52:06.646954 systemd[1]: Finished systemd-update-done.service - Update is Completed. Jan 20 23:52:06.648184 systemd[1]: Reached target sysinit.target - System Initialization. Jan 20 23:52:06.649280 systemd[1]: Started motdgen.path - Watch for update engine configuration changes. Jan 20 23:52:06.650408 systemd[1]: Started user-cloudinit@var-lib-flatcar\x2dinstall-user_data.path - Watch for a cloud-config at /var/lib/flatcar-install/user_data. Jan 20 23:52:06.651689 systemd[1]: Started logrotate.timer - Daily rotation of log files. Jan 20 23:52:06.652796 systemd[1]: Started mdadm.timer - Weekly check for MD array's redundancy information.. Jan 20 23:52:06.653899 systemd[1]: Started systemd-sysupdate-reboot.timer - Reboot Automatically After System Update. Jan 20 23:52:06.655032 systemd[1]: Started systemd-sysupdate.timer - Automatic System Update. Jan 20 23:52:06.655980 systemd[1]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of Temporary Directories. Jan 20 23:52:06.657064 systemd[1]: update-engine-stub.timer - Update Engine Stub Timer was skipped because of an unmet condition check (ConditionPathExists=/usr/.noupdate). Jan 20 23:52:06.657097 systemd[1]: Reached target paths.target - Path Units. Jan 20 23:52:06.657875 systemd[1]: Reached target timers.target - Timer Units. 
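The /25 prefix in the DHCPv4 lease above places eth0's address and the advertised gateway in the upper half of 10.0.2.0/24, i.e. 10.0.2.128/25, so gateway 10.0.2.129 is directly on-link. A quick check with Python's standard ipaddress module, using the values copied from the lease line above:

    import ipaddress

    # Values taken from the DHCPv4 lease logged above.
    iface = ipaddress.ip_interface("10.0.2.209/25")
    gateway = ipaddress.ip_address("10.0.2.129")

    print(iface.network)             # 10.0.2.128/25
    print(gateway in iface.network)  # True: the gateway is reachable on-link from eth0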
Jan 20 23:52:06.659967 systemd[1]: Listening on dbus.socket - D-Bus System Message Bus Socket. Jan 20 23:52:06.662165 systemd[1]: Starting docker.socket - Docker Socket for the API... Jan 20 23:52:06.664709 systemd[1]: Listening on sshd-unix-local.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_UNIX Local). Jan 20 23:52:06.665942 systemd[1]: Listening on sshd-vsock.socket - OpenSSH Server Socket (systemd-ssh-generator, AF_VSOCK). Jan 20 23:52:06.667010 systemd[1]: Reached target ssh-access.target - SSH Access Available. Jan 20 23:52:06.673837 systemd[1]: Listening on sshd.socket - OpenSSH Server Socket. Jan 20 23:52:06.674979 systemd[1]: Listening on systemd-hostnamed.socket - Hostname Service Socket. Jan 20 23:52:06.676608 systemd[1]: Listening on docker.socket - Docker Socket for the API. Jan 20 23:52:06.677612 systemd[1]: Reached target sockets.target - Socket Units. Jan 20 23:52:06.678387 systemd[1]: Reached target basic.target - Basic System. Jan 20 23:52:06.679265 systemd[1]: addon-config@oem.service - Configure Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:52:06.679293 systemd[1]: addon-run@oem.service - Run Addon /oem was skipped because no trigger condition checks were met. Jan 20 23:52:06.681944 systemd[1]: Starting chronyd.service - NTP client/server... Jan 20 23:52:06.683624 systemd[1]: Starting containerd.service - containerd container runtime... Jan 20 23:52:06.687611 systemd[1]: Starting coreos-metadata.service - Flatcar Metadata Agent... Jan 20 23:52:06.689501 systemd[1]: Starting dbus.service - D-Bus System Message Bus... Jan 20 23:52:06.691172 systemd[1]: Starting dracut-shutdown.service - Restore /run/initramfs on shutdown... Jan 20 23:52:06.694487 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:06.695203 systemd[1]: Starting enable-oem-cloudinit.service - Enable cloudinit... Jan 20 23:52:06.699540 systemd[1]: Starting extend-filesystems.service - Extend Filesystems... Jan 20 23:52:06.701418 systemd[1]: flatcar-setup-environment.service - Modifies /etc/environment for CoreOS was skipped because of an unmet condition check (ConditionPathExists=/oem/bin/flatcar-setup-environment). Jan 20 23:52:06.709955 systemd[1]: Starting motdgen.service - Generate /run/flatcar/motd... Jan 20 23:52:06.713172 systemd[1]: Starting prepare-helm.service - Unpack helm to /opt/bin... Jan 20 23:52:06.714910 jq[1642]: false Jan 20 23:52:06.715406 systemd[1]: Starting ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline... Jan 20 23:52:06.718275 systemd[1]: Starting sshd-keygen.service - Generate sshd host keys... Jan 20 23:52:06.723632 systemd[1]: Starting systemd-logind.service - User Login Management... Jan 20 23:52:06.724551 systemd[1]: tcsd.service - TCG Core Services Daemon was skipped because of an unmet condition check (ConditionPathExists=/dev/tpm0). Jan 20 23:52:06.724971 systemd[1]: cgroup compatibility translation between legacy and unified hierarchy settings activated. See cgroup-compat debug messages for details. Jan 20 23:52:06.726002 systemd[1]: Starting update-engine.service - Update Engine... Jan 20 23:52:06.728018 extend-filesystems[1643]: Found /dev/vda6 Jan 20 23:52:06.728116 systemd[1]: Starting update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition... Jan 20 23:52:06.730728 systemd[1]: Finished dracut-shutdown.service - Restore /run/initramfs on shutdown. 
Jan 20 23:52:06.731141 chronyd[1635]: chronyd version 4.8 starting (+CMDMON +REFCLOCK +RTC +PRIVDROP +SCFILTER -SIGND +NTS +SECHASH +IPV6 -DEBUG) Jan 20 23:52:06.732189 chronyd[1635]: Loaded seccomp filter (level 2) Jan 20 23:52:06.732788 systemd[1]: Started chronyd.service - NTP client/server. Jan 20 23:52:06.733905 systemd[1]: enable-oem-cloudinit.service: Skipped due to 'exec-condition'. Jan 20 23:52:06.734133 systemd[1]: Condition check resulted in enable-oem-cloudinit.service - Enable cloudinit being skipped. Jan 20 23:52:06.737860 systemd[1]: ssh-key-proc-cmdline.service: Deactivated successfully. Jan 20 23:52:06.738072 systemd[1]: Finished ssh-key-proc-cmdline.service - Install an ssh key from /proc/cmdline. Jan 20 23:52:06.745115 jq[1654]: true Jan 20 23:52:06.747172 extend-filesystems[1643]: Found /dev/vda9 Jan 20 23:52:06.755071 extend-filesystems[1643]: Checking size of /dev/vda9 Jan 20 23:52:06.758480 jq[1680]: true Jan 20 23:52:06.758393 systemd[1]: motdgen.service: Deactivated successfully. Jan 20 23:52:06.760107 systemd[1]: Finished motdgen.service - Generate /run/flatcar/motd. Jan 20 23:52:06.765835 extend-filesystems[1643]: Resized partition /dev/vda9 Jan 20 23:52:06.771443 extend-filesystems[1690]: resize2fs 1.47.3 (8-Jul-2025) Jan 20 23:52:06.775512 tar[1657]: linux-arm64/LICENSE Jan 20 23:52:06.775512 tar[1657]: linux-arm64/helm Jan 20 23:52:06.781545 kernel: EXT4-fs (vda9): resizing filesystem from 1617920 to 11516923 blocks Jan 20 23:52:06.793957 dbus-daemon[1638]: [system] SELinux support is enabled Jan 20 23:52:06.794255 systemd[1]: Started dbus.service - D-Bus System Message Bus. Jan 20 23:52:06.797721 systemd[1]: system-cloudinit@usr-share-oem-cloud\x2dconfig.yml.service - Load cloud-config from /usr/share/oem/cloud-config.yml was skipped because of an unmet condition check (ConditionFileNotEmpty=/usr/share/oem/cloud-config.yml). Jan 20 23:52:06.797754 systemd[1]: Reached target system-config.target - Load system-provided cloud configs. Jan 20 23:52:06.799514 systemd[1]: user-cloudinit-proc-cmdline.service - Load cloud-config from url defined in /proc/cmdline was skipped because of an unmet condition check (ConditionKernelCommandLine=cloud-config-url). Jan 20 23:52:06.799545 systemd[1]: Reached target user-config.target - Load user-provided cloud configs. Jan 20 23:52:06.807317 update_engine[1652]: I20260120 23:52:06.803323 1652 main.cc:92] Flatcar Update Engine starting Jan 20 23:52:06.813945 update_engine[1652]: I20260120 23:52:06.813711 1652 update_check_scheduler.cc:74] Next update check in 8m29s Jan 20 23:52:06.815164 systemd[1]: Started update-engine.service - Update Engine. Jan 20 23:52:06.823274 systemd[1]: Started locksmithd.service - Cluster reboot manager. Jan 20 23:52:06.846538 systemd-logind[1651]: New seat seat0. Jan 20 23:52:06.892612 locksmithd[1707]: locksmithd starting currentOperation="UPDATE_STATUS_IDLE" strategy="reboot" Jan 20 23:52:06.917835 systemd-logind[1651]: Watching system buttons on /dev/input/event0 (Power Button) Jan 20 23:52:06.917854 systemd-logind[1651]: Watching system buttons on /dev/input/event2 (QEMU QEMU USB Keyboard) Jan 20 23:52:06.918131 systemd[1]: Started systemd-logind.service - User Login Management. 
Jan 20 23:52:06.938335 containerd[1670]: time="2026-01-20T23:52:06Z" level=warning msg="Ignoring unknown key in TOML" column=1 error="strict mode: fields in the document are missing in the target struct" file=/usr/share/containerd/config.toml key=subreaper row=8 Jan 20 23:52:06.939364 containerd[1670]: time="2026-01-20T23:52:06.939327440Z" level=info msg="starting containerd" revision=fcd43222d6b07379a4be9786bda52438f0dd16a1 version=v2.1.5 Jan 20 23:52:06.939922 bash[1706]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:52:06.942997 systemd[1]: Finished update-ssh-keys-after-ignition.service - Run update-ssh-keys once after Ignition. Jan 20 23:52:06.950992 containerd[1670]: time="2026-01-20T23:52:06.950941200Z" level=warning msg="Configuration migrated from version 2, use `containerd config migrate` to avoid migration" t="10.04µs" Jan 20 23:52:06.950992 containerd[1670]: time="2026-01-20T23:52:06.950991440Z" level=info msg="loading plugin" id=io.containerd.content.v1.content type=io.containerd.content.v1 Jan 20 23:52:06.950980 systemd[1]: Starting sshkeys.service... Jan 20 23:52:06.951165 containerd[1670]: time="2026-01-20T23:52:06.951037880Z" level=info msg="loading plugin" id=io.containerd.image-verifier.v1.bindir type=io.containerd.image-verifier.v1 Jan 20 23:52:06.951165 containerd[1670]: time="2026-01-20T23:52:06.951056040Z" level=info msg="loading plugin" id=io.containerd.internal.v1.opt type=io.containerd.internal.v1 Jan 20 23:52:06.951498 containerd[1670]: time="2026-01-20T23:52:06.951469680Z" level=info msg="loading plugin" id=io.containerd.warning.v1.deprecations type=io.containerd.warning.v1 Jan 20 23:52:06.951587 containerd[1670]: time="2026-01-20T23:52:06.951510080Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:52:06.951676 containerd[1670]: time="2026-01-20T23:52:06.951655680Z" level=info msg="skip loading plugin" error="no scratch file generator: skip plugin" id=io.containerd.snapshotter.v1.blockfile type=io.containerd.snapshotter.v1 Jan 20 23:52:06.951701 containerd[1670]: time="2026-01-20T23:52:06.951676200Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952050 containerd[1670]: time="2026-01-20T23:52:06.952024280Z" level=info msg="skip loading plugin" error="path /var/lib/containerd/io.containerd.snapshotter.v1.btrfs (ext4) must be a btrfs filesystem to be used with the btrfs snapshotter: skip plugin" id=io.containerd.snapshotter.v1.btrfs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952074 containerd[1670]: time="2026-01-20T23:52:06.952050560Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952074 containerd[1670]: time="2026-01-20T23:52:06.952068320Z" level=info msg="skip loading plugin" error="devmapper not configured: skip plugin" id=io.containerd.snapshotter.v1.devmapper type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952117 containerd[1670]: time="2026-01-20T23:52:06.952081600Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952305 containerd[1670]: time="2026-01-20T23:52:06.952281880Z" level=info msg="skip loading plugin" error="EROFS unsupported, please `modprobe erofs`: skip plugin" id=io.containerd.snapshotter.v1.erofs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952331 containerd[1670]: time="2026-01-20T23:52:06.952304880Z" level=info msg="loading 
plugin" id=io.containerd.snapshotter.v1.native type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952412 containerd[1670]: time="2026-01-20T23:52:06.952395120Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.overlayfs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952782 containerd[1670]: time="2026-01-20T23:52:06.952744320Z" level=info msg="loading plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952822 containerd[1670]: time="2026-01-20T23:52:06.952803200Z" level=info msg="skip loading plugin" error="lstat /var/lib/containerd/io.containerd.snapshotter.v1.zfs: no such file or directory: skip plugin" id=io.containerd.snapshotter.v1.zfs type=io.containerd.snapshotter.v1 Jan 20 23:52:06.952942 containerd[1670]: time="2026-01-20T23:52:06.952819800Z" level=info msg="loading plugin" id=io.containerd.event.v1.exchange type=io.containerd.event.v1 Jan 20 23:52:06.952942 containerd[1670]: time="2026-01-20T23:52:06.952916360Z" level=info msg="loading plugin" id=io.containerd.monitor.task.v1.cgroups type=io.containerd.monitor.task.v1 Jan 20 23:52:06.953131 containerd[1670]: time="2026-01-20T23:52:06.953110840Z" level=info msg="loading plugin" id=io.containerd.metadata.v1.bolt type=io.containerd.metadata.v1 Jan 20 23:52:06.953213 containerd[1670]: time="2026-01-20T23:52:06.953187200Z" level=info msg="metadata content store policy set" policy=shared Jan 20 23:52:06.981085 systemd[1]: Created slice system-coreos\x2dmetadata\x2dsshkeys.slice - Slice /system/coreos-metadata-sshkeys. Jan 20 23:52:06.984279 systemd[1]: Starting coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys)... Jan 20 23:52:06.994054 sshd_keygen[1673]: ssh-keygen: generating new host keys: RSA ECDSA ED25519 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995664880Z" level=info msg="loading plugin" id=io.containerd.gc.v1.scheduler type=io.containerd.gc.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995735080Z" level=info msg="loading plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995879640Z" level=info msg="skip loading plugin" error="could not find mkfs.erofs: exec: \"mkfs.erofs\": executable file not found in $PATH: skip plugin" id=io.containerd.differ.v1.erofs type=io.containerd.differ.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995896320Z" level=info msg="loading plugin" id=io.containerd.differ.v1.walking type=io.containerd.differ.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995911680Z" level=info msg="loading plugin" id=io.containerd.lease.v1.manager type=io.containerd.lease.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995923560Z" level=info msg="loading plugin" id=io.containerd.service.v1.containers-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995935840Z" level=info msg="loading plugin" id=io.containerd.service.v1.content-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995945600Z" level=info msg="loading plugin" id=io.containerd.service.v1.diff-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995957960Z" level=info msg="loading plugin" id=io.containerd.service.v1.images-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: 
time="2026-01-20T23:52:06.995971040Z" level=info msg="loading plugin" id=io.containerd.service.v1.introspection-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995982920Z" level=info msg="loading plugin" id=io.containerd.service.v1.namespaces-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.995993520Z" level=info msg="loading plugin" id=io.containerd.service.v1.snapshots-service type=io.containerd.service.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.996004440Z" level=info msg="loading plugin" id=io.containerd.shim.v1.manager type=io.containerd.shim.v1 Jan 20 23:52:06.996539 containerd[1670]: time="2026-01-20T23:52:06.996018080Z" level=info msg="loading plugin" id=io.containerd.runtime.v2.task type=io.containerd.runtime.v2 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996139760Z" level=info msg="loading plugin" id=io.containerd.service.v1.tasks-service type=io.containerd.service.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996159720Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.containers type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996181280Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.content type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996195200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.diff type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996211880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.events type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996221920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.images type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996233200Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.introspection type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996244000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.leases type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996254400Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.namespaces type=io.containerd.grpc.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996264600Z" level=info msg="loading plugin" id=io.containerd.sandbox.store.v1.local type=io.containerd.sandbox.store.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996274560Z" level=info msg="loading plugin" id=io.containerd.transfer.v1.local type=io.containerd.transfer.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996299200Z" level=info msg="loading plugin" id=io.containerd.cri.v1.images type=io.containerd.cri.v1 Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996336280Z" level=info msg="Get image filesystem path \"/var/lib/containerd/io.containerd.snapshotter.v1.overlayfs\" for snapshotter \"overlayfs\"" Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996352400Z" level=info msg="Start snapshots syncer" Jan 20 23:52:06.996953 containerd[1670]: time="2026-01-20T23:52:06.996371920Z" level=info msg="loading plugin" id=io.containerd.cri.v1.runtime type=io.containerd.cri.v1 Jan 20 23:52:06.997244 containerd[1670]: time="2026-01-20T23:52:06.996637040Z" level=info 
msg="starting cri plugin" config="{\"containerd\":{\"defaultRuntimeName\":\"runc\",\"runtimes\":{\"runc\":{\"runtimeType\":\"io.containerd.runc.v2\",\"runtimePath\":\"\",\"PodAnnotations\":null,\"ContainerAnnotations\":null,\"options\":{\"BinaryName\":\"\",\"CriuImagePath\":\"\",\"CriuWorkPath\":\"\",\"IoGid\":0,\"IoUid\":0,\"NoNewKeyring\":false,\"Root\":\"\",\"ShimCgroup\":\"\",\"SystemdCgroup\":true},\"privileged_without_host_devices\":false,\"privileged_without_host_devices_all_devices_allowed\":false,\"cgroupWritable\":false,\"baseRuntimeSpec\":\"\",\"cniConfDir\":\"\",\"cniMaxConfNum\":0,\"snapshotter\":\"\",\"sandboxer\":\"podsandbox\",\"io_type\":\"\"}},\"ignoreBlockIONotEnabledErrors\":false,\"ignoreRdtNotEnabledErrors\":false},\"cni\":{\"binDir\":\"\",\"binDirs\":[\"/opt/cni/bin\"],\"confDir\":\"/etc/cni/net.d\",\"maxConfNum\":1,\"setupSerially\":false,\"confTemplate\":\"\",\"ipPref\":\"\",\"useInternalLoopback\":false},\"enableSelinux\":true,\"selinuxCategoryRange\":1024,\"maxContainerLogLineSize\":16384,\"disableApparmor\":false,\"restrictOOMScoreAdj\":false,\"disableProcMount\":false,\"unsetSeccompProfile\":\"\",\"tolerateMissingHugetlbController\":true,\"disableHugetlbController\":true,\"device_ownership_from_security_context\":false,\"ignoreImageDefinedVolumes\":false,\"netnsMountsUnderStateDir\":false,\"enableUnprivilegedPorts\":true,\"enableUnprivilegedICMP\":true,\"enableCDI\":true,\"cdiSpecDirs\":[\"/etc/cdi\",\"/var/run/cdi\"],\"drainExecSyncIOTimeout\":\"0s\",\"ignoreDeprecationWarnings\":null,\"containerdRootDir\":\"/var/lib/containerd\",\"containerdEndpoint\":\"/run/containerd/containerd.sock\",\"rootDir\":\"/var/lib/containerd/io.containerd.grpc.v1.cri\",\"stateDir\":\"/run/containerd/io.containerd.grpc.v1.cri\"}" Jan 20 23:52:06.997244 containerd[1670]: time="2026-01-20T23:52:06.996688640Z" level=info msg="loading plugin" id=io.containerd.podsandbox.controller.v1.podsandbox type=io.containerd.podsandbox.controller.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.996747280Z" level=info msg="loading plugin" id=io.containerd.sandbox.controller.v1.shim type=io.containerd.sandbox.controller.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.996947920Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandbox-controllers type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.996975560Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.sandboxes type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997005360Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.snapshots type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997016160Z" level=info msg="loading plugin" id=io.containerd.streaming.v1.manager type=io.containerd.streaming.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997031000Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.streaming type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997044080Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.tasks type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997055600Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.transfer type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997067520Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.version 
type=io.containerd.grpc.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997088320Z" level=info msg="loading plugin" id=io.containerd.monitor.container.v1.restart type=io.containerd.monitor.container.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997116400Z" level=info msg="loading plugin" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997131320Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.tracing.processor.v1.otlp type=io.containerd.tracing.processor.v1 Jan 20 23:52:06.997353 containerd[1670]: time="2026-01-20T23:52:06.997140120Z" level=info msg="loading plugin" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:52:06.999604 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997150800Z" level=info msg="skip loading plugin" error="skip plugin: tracing endpoint not configured" id=io.containerd.internal.v1.tracing type=io.containerd.internal.v1 Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997159040Z" level=info msg="loading plugin" id=io.containerd.ttrpc.v1.otelttrpc type=io.containerd.ttrpc.v1 Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997173840Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.healthcheck type=io.containerd.grpc.v1 Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997185000Z" level=info msg="loading plugin" id=io.containerd.nri.v1.nri type=io.containerd.nri.v1 Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997299760Z" level=info msg="runtime interface created" Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997305640Z" level=info msg="created NRI interface" Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997315880Z" level=info msg="loading plugin" id=io.containerd.grpc.v1.cri type=io.containerd.grpc.v1 Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997327080Z" level=info msg="Connect containerd service" Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.997353400Z" level=info msg="using experimental NRI integration - disable nri plugin to prevent this" Jan 20 23:52:06.999780 containerd[1670]: time="2026-01-20T23:52:06.998096160Z" level=error msg="failed to load cni during init, please check CRI plugin status before setting up network for pods" error="cni config load failed: no network config found in /etc/cni/net.d: cni plugin not initialized: failed to load cni config" Jan 20 23:52:07.020652 systemd[1]: Finished sshd-keygen.service - Generate sshd host keys. Jan 20 23:52:07.025237 systemd[1]: Starting issuegen.service - Generate /run/issue... Jan 20 23:52:07.046669 systemd[1]: issuegen.service: Deactivated successfully. Jan 20 23:52:07.047978 systemd[1]: Finished issuegen.service - Generate /run/issue. Jan 20 23:52:07.053765 systemd[1]: Starting systemd-user-sessions.service - Permit User Sessions... Jan 20 23:52:07.072135 systemd[1]: Finished systemd-user-sessions.service - Permit User Sessions. Jan 20 23:52:07.075447 systemd[1]: Started getty@tty1.service - Getty on tty1. Jan 20 23:52:07.078093 systemd[1]: Started serial-getty@ttyAMA0.service - Serial Getty on ttyAMA0. Jan 20 23:52:07.081689 systemd[1]: Reached target getty.target - Login Prompts. 
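The "no network config found in /etc/cni/net.d" error above is expected on a node whose pod network has not been configured yet; the CRI plugin keeps a conf syncer running and picks a config up once a file appears. A hedged sketch only (the file name, network name and subnet below are illustrative placeholders, and the bridge, host-local and portmap plugin binaries must already exist under the /opt/cni/bin directory listed in the CRI config above):

# Sketch: minimal bridge + host-local CNI config for containerd's conf syncer.
mkdir -p /etc/cni/net.d
cat >/etc/cni/net.d/10-example.conflist <<'EOF'
{
  "cniVersion": "1.0.0",
  "name": "example-net",
  "plugins": [
    {
      "type": "bridge",
      "bridge": "cni0",
      "isGateway": true,
      "ipMasq": true,
      "ipam": {
        "type": "host-local",
        "ranges": [[ { "subnet": "10.88.0.0/16" } ]],
        "routes": [ { "dst": "0.0.0.0/0" } ]
      }
    },
    { "type": "portmap", "capabilities": { "portMappings": true } }
  ]
}
EOF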
Jan 20 23:52:07.106611 containerd[1670]: time="2026-01-20T23:52:07.106493240Z" level=info msg="Start subscribing containerd event" Jan 20 23:52:07.106722 containerd[1670]: time="2026-01-20T23:52:07.106687280Z" level=info msg="Start recovering state" Jan 20 23:52:07.106897 containerd[1670]: time="2026-01-20T23:52:07.106785000Z" level=info msg="Start event monitor" Jan 20 23:52:07.106924 containerd[1670]: time="2026-01-20T23:52:07.106912200Z" level=info msg="Start cni network conf syncer for default" Jan 20 23:52:07.106973 containerd[1670]: time="2026-01-20T23:52:07.106928400Z" level=info msg="Start streaming server" Jan 20 23:52:07.106973 containerd[1670]: time="2026-01-20T23:52:07.106940480Z" level=info msg="Registered namespace \"k8s.io\" with NRI" Jan 20 23:52:07.106973 containerd[1670]: time="2026-01-20T23:52:07.106949800Z" level=info msg="runtime interface starting up..." Jan 20 23:52:07.106973 containerd[1670]: time="2026-01-20T23:52:07.106956720Z" level=info msg="starting plugins..." Jan 20 23:52:07.107099 containerd[1670]: time="2026-01-20T23:52:07.106975520Z" level=info msg="Synchronizing NRI (plugin) with current runtime state" Jan 20 23:52:07.107289 containerd[1670]: time="2026-01-20T23:52:07.107148880Z" level=info msg=serving... address=/run/containerd/containerd.sock.ttrpc Jan 20 23:52:07.107344 containerd[1670]: time="2026-01-20T23:52:07.107326600Z" level=info msg=serving... address=/run/containerd/containerd.sock Jan 20 23:52:07.108474 containerd[1670]: time="2026-01-20T23:52:07.108057880Z" level=info msg="containerd successfully booted in 0.170084s" Jan 20 23:52:07.108305 systemd[1]: Started containerd.service - containerd container runtime. Jan 20 23:52:07.138488 kernel: EXT4-fs (vda9): resized filesystem to 11516923 Jan 20 23:52:07.160060 extend-filesystems[1690]: Filesystem at /dev/vda9 is mounted on /; on-line resizing required Jan 20 23:52:07.160060 extend-filesystems[1690]: old_desc_blocks = 1, new_desc_blocks = 6 Jan 20 23:52:07.160060 extend-filesystems[1690]: The filesystem on /dev/vda9 is now 11516923 (4k) blocks long. Jan 20 23:52:07.164391 extend-filesystems[1643]: Resized filesystem in /dev/vda9 Jan 20 23:52:07.161539 systemd[1]: extend-filesystems.service: Deactivated successfully. Jan 20 23:52:07.161804 systemd[1]: Finished extend-filesystems.service - Extend Filesystems. Jan 20 23:52:07.228218 tar[1657]: linux-arm64/README.md Jan 20 23:52:07.249570 systemd[1]: Finished prepare-helm.service - Unpack helm to /opt/bin. Jan 20 23:52:07.446774 systemd-networkd[1586]: eth0: Gained IPv6LL Jan 20 23:52:07.449366 systemd[1]: Finished systemd-networkd-wait-online.service - Wait for Network to be Configured. Jan 20 23:52:07.452041 systemd[1]: Reached target network-online.target - Network is Online. Jan 20 23:52:07.454977 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:07.457315 systemd[1]: Starting nvidia.service - NVIDIA Configure Service... Jan 20 23:52:07.486171 systemd[1]: Finished nvidia.service - NVIDIA Configure Service. Jan 20 23:52:07.705490 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:08.016515 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:08.254027 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
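The extend-filesystems output above records an online grow of the root filesystem on /dev/vda9 to 11516923 4k blocks. A sketch of the equivalent manual steps, assuming the same device layout (growpart is part of cloud-utils; resize2fs grows a mounted ext4 filesystem in place):

# Sketch: extend partition 9 of /dev/vda, then resize the mounted ext4 filesystem.
growpart /dev/vda 9
resize2fs /dev/vda9
df -h /    # confirm the new size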
Jan 20 23:52:08.257795 (kubelet)[1779]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:52:08.758128 kubelet[1779]: E0120 23:52:08.758052 1779 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:52:08.761223 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:52:08.761354 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:52:08.761960 systemd[1]: kubelet.service: Consumed 770ms CPU time, 255.2M memory peak. Jan 20 23:52:09.716487 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:10.029579 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:13.726566 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:13.735079 coreos-metadata[1637]: Jan 20 23:52:13.734 WARN failed to locate config-drive, using the metadata service API instead Jan 20 23:52:13.751199 coreos-metadata[1637]: Jan 20 23:52:13.751 INFO Fetching http://169.254.169.254/openstack/2012-08-10/meta_data.json: Attempt #1 Jan 20 23:52:14.040534 kernel: /dev/disk/by-label/config-2: Can't lookup blockdev Jan 20 23:52:14.045753 coreos-metadata[1722]: Jan 20 23:52:14.045 WARN failed to locate config-drive, using the metadata service API instead Jan 20 23:52:14.058847 coreos-metadata[1722]: Jan 20 23:52:14.058 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys: Attempt #1 Jan 20 23:52:15.965534 systemd[1]: Created slice system-sshd.slice - Slice /system/sshd. Jan 20 23:52:15.967169 systemd[1]: Started sshd@0-10.0.2.209:22-20.161.92.111:35440.service - OpenSSH per-connection server daemon (20.161.92.111:35440). Jan 20 23:52:16.609287 sshd[1798]: Accepted publickey for core from 20.161.92.111 port 35440 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:16.613711 sshd-session[1798]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:16.624358 systemd-logind[1651]: New session 1 of user core. Jan 20 23:52:16.625656 systemd[1]: Created slice user-500.slice - User Slice of UID 500. Jan 20 23:52:16.626726 systemd[1]: Starting user-runtime-dir@500.service - User Runtime Directory /run/user/500... Jan 20 23:52:16.647486 systemd[1]: Finished user-runtime-dir@500.service - User Runtime Directory /run/user/500. Jan 20 23:52:16.649806 systemd[1]: Starting user@500.service - User Manager for UID 500... Jan 20 23:52:16.677691 (systemd)[1804]: pam_unix(systemd-user:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:16.680169 systemd-logind[1651]: New session 2 of user core. Jan 20 23:52:16.817154 systemd[1804]: Queued start job for default target default.target. Jan 20 23:52:16.825768 systemd[1804]: Created slice app.slice - User Application Slice. Jan 20 23:52:16.825804 systemd[1804]: Started systemd-tmpfiles-clean.timer - Daily Cleanup of User's Temporary Directories. Jan 20 23:52:16.825815 systemd[1804]: Reached target paths.target - Paths. Jan 20 23:52:16.825870 systemd[1804]: Reached target timers.target - Timers. Jan 20 23:52:16.827145 systemd[1804]: Starting dbus.socket - D-Bus User Message Bus Socket... 
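The kubelet exit above ("open /var/lib/kubelet/config.yaml: no such file or directory") is the usual state of a node that has not yet joined a cluster: kubeadm writes that file during init/join, and systemd keeps restarting the unit until it exists. A sketch assuming kubeadm is the provisioning path; the API server address, token and hash are placeholders:

# Sketch: joining the cluster generates /var/lib/kubelet/config.yaml and
# /etc/kubernetes/kubelet.conf, after which the restart loop settles.
kubeadm join 10.0.2.1:6443 \
  --token <token> \
  --discovery-token-ca-cert-hash sha256:<hash>
systemctl status kubelet
ls -l /var/lib/kubelet/config.yaml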
Jan 20 23:52:16.827948 systemd[1804]: Starting systemd-tmpfiles-setup.service - Create User Files and Directories... Jan 20 23:52:16.837685 systemd[1804]: Listening on dbus.socket - D-Bus User Message Bus Socket. Jan 20 23:52:16.837788 systemd[1804]: Reached target sockets.target - Sockets. Jan 20 23:52:16.839500 systemd[1804]: Finished systemd-tmpfiles-setup.service - Create User Files and Directories. Jan 20 23:52:16.839615 systemd[1804]: Reached target basic.target - Basic System. Jan 20 23:52:16.839677 systemd[1804]: Reached target default.target - Main User Target. Jan 20 23:52:16.839707 systemd[1804]: Startup finished in 154ms. Jan 20 23:52:16.839796 systemd[1]: Started user@500.service - User Manager for UID 500. Jan 20 23:52:16.850946 systemd[1]: Started session-1.scope - Session 1 of User core. Jan 20 23:52:17.150543 systemd[1]: Started sshd@1-10.0.2.209:22-20.161.92.111:60260.service - OpenSSH per-connection server daemon (20.161.92.111:60260). Jan 20 23:52:17.317628 coreos-metadata[1722]: Jan 20 23:52:17.317 INFO Fetch successful Jan 20 23:52:17.317628 coreos-metadata[1722]: Jan 20 23:52:17.317 INFO Fetching http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key: Attempt #1 Jan 20 23:52:17.319937 coreos-metadata[1637]: Jan 20 23:52:17.319 INFO Fetch successful Jan 20 23:52:17.319937 coreos-metadata[1637]: Jan 20 23:52:17.319 INFO Fetching http://169.254.169.254/latest/meta-data/hostname: Attempt #1 Jan 20 23:52:17.674688 sshd[1819]: Accepted publickey for core from 20.161.92.111 port 60260 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:17.676100 sshd-session[1819]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:17.680952 systemd-logind[1651]: New session 3 of user core. Jan 20 23:52:17.691949 systemd[1]: Started session-3.scope - Session 3 of User core. Jan 20 23:52:17.963443 sshd[1823]: Connection closed by 20.161.92.111 port 60260 Jan 20 23:52:17.963612 sshd-session[1819]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:17.967779 systemd[1]: sshd@1-10.0.2.209:22-20.161.92.111:60260.service: Deactivated successfully. Jan 20 23:52:17.969349 systemd[1]: session-3.scope: Deactivated successfully. Jan 20 23:52:17.971648 systemd-logind[1651]: Session 3 logged out. Waiting for processes to exit. Jan 20 23:52:17.973228 systemd-logind[1651]: Removed session 3. Jan 20 23:52:18.078524 systemd[1]: Started sshd@2-10.0.2.209:22-20.161.92.111:60262.service - OpenSSH per-connection server daemon (20.161.92.111:60262). Jan 20 23:52:18.635482 sshd[1829]: Accepted publickey for core from 20.161.92.111 port 60262 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:18.636908 sshd-session[1829]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:18.640921 systemd-logind[1651]: New session 4 of user core. Jan 20 23:52:18.658718 systemd[1]: Started session-4.scope - Session 4 of User core. Jan 20 23:52:18.839083 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 1. Jan 20 23:52:18.840589 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:18.939868 sshd[1833]: Connection closed by 20.161.92.111 port 60262 Jan 20 23:52:18.939658 sshd-session[1829]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:18.943787 systemd[1]: sshd@2-10.0.2.209:22-20.161.92.111:60262.service: Deactivated successfully. 
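Having failed to locate a config-2 drive, the coreos-metadata agents above fall back to the HTTP metadata service on 169.254.169.254. The same endpoints the log shows being fetched can be queried by hand:

# The URLs below are taken verbatim from the fetch attempts in this log.
curl -s http://169.254.169.254/openstack/2012-08-10/meta_data.json
curl -s http://169.254.169.254/latest/meta-data/hostname
curl -s http://169.254.169.254/latest/meta-data/public-keys/0/openssh-key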
Jan 20 23:52:18.945382 systemd[1]: session-4.scope: Deactivated successfully. Jan 20 23:52:18.947507 systemd-logind[1651]: Session 4 logged out. Waiting for processes to exit. Jan 20 23:52:18.948432 systemd-logind[1651]: Removed session 4. Jan 20 23:52:18.997235 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:19.001561 (kubelet)[1846]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:52:19.045468 kubelet[1846]: E0120 23:52:19.045400 1846 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:52:19.048691 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:52:19.048880 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:52:19.049327 systemd[1]: kubelet.service: Consumed 147ms CPU time, 107.2M memory peak. Jan 20 23:52:20.316795 coreos-metadata[1722]: Jan 20 23:52:20.316 INFO Fetch successful Jan 20 23:52:20.388992 unknown[1722]: wrote ssh authorized keys file for user: core Jan 20 23:52:20.419827 update-ssh-keys[1855]: Updated "/home/core/.ssh/authorized_keys" Jan 20 23:52:20.422523 systemd[1]: Finished coreos-metadata-sshkeys@core.service - Flatcar Metadata Agent (SSH Keys). Jan 20 23:52:20.423894 systemd[1]: Finished sshkeys.service. Jan 20 23:52:20.897950 coreos-metadata[1637]: Jan 20 23:52:20.897 INFO Fetch successful Jan 20 23:52:20.897950 coreos-metadata[1637]: Jan 20 23:52:20.897 INFO Fetching http://169.254.169.254/latest/meta-data/instance-id: Attempt #1 Jan 20 23:52:22.131189 coreos-metadata[1637]: Jan 20 23:52:22.131 INFO Fetch successful Jan 20 23:52:22.131189 coreos-metadata[1637]: Jan 20 23:52:22.131 INFO Fetching http://169.254.169.254/latest/meta-data/instance-type: Attempt #1 Jan 20 23:52:23.390981 coreos-metadata[1637]: Jan 20 23:52:23.390 INFO Fetch successful Jan 20 23:52:23.390981 coreos-metadata[1637]: Jan 20 23:52:23.390 INFO Fetching http://169.254.169.254/latest/meta-data/local-ipv4: Attempt #1 Jan 20 23:52:25.207644 coreos-metadata[1637]: Jan 20 23:52:25.207 INFO Fetch successful Jan 20 23:52:25.207644 coreos-metadata[1637]: Jan 20 23:52:25.207 INFO Fetching http://169.254.169.254/latest/meta-data/public-ipv4: Attempt #1 Jan 20 23:52:25.828289 coreos-metadata[1637]: Jan 20 23:52:25.828 INFO Fetch successful Jan 20 23:52:25.876596 systemd[1]: Finished coreos-metadata.service - Flatcar Metadata Agent. Jan 20 23:52:25.877143 systemd[1]: packet-phone-home.service - Report Success to Packet was skipped because no trigger condition checks were met. Jan 20 23:52:25.877287 systemd[1]: Reached target multi-user.target - Multi-User System. Jan 20 23:52:25.877492 systemd[1]: Startup finished in 2.497s (kernel) + 14.850s (initrd) + 21.914s (userspace) = 39.263s. Jan 20 23:52:29.052904 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 2. Jan 20 23:52:29.054433 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:29.055559 systemd[1]: Started sshd@3-10.0.2.209:22-20.161.92.111:40472.service - OpenSSH per-connection server daemon (20.161.92.111:40472). Jan 20 23:52:29.185861 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
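The "Startup finished in 2.497s (kernel) + 14.850s (initrd) + 21.914s (userspace)" summary above can be broken down further with systemd-analyze; a small sketch of the usual follow-up commands:

systemd-analyze                                   # same kernel/initrd/userspace split
systemd-analyze blame                             # per-unit activation time, slowest first
systemd-analyze critical-chain multi-user.target  # the chain gating multi-user.target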
Jan 20 23:52:29.189743 (kubelet)[1875]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:52:29.226183 kubelet[1875]: E0120 23:52:29.226135 1875 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:52:29.228899 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:52:29.229037 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:52:29.229390 systemd[1]: kubelet.service: Consumed 140ms CPU time, 107.7M memory peak. Jan 20 23:52:29.604355 sshd[1865]: Accepted publickey for core from 20.161.92.111 port 40472 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:29.606225 sshd-session[1865]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:29.610327 systemd-logind[1651]: New session 5 of user core. Jan 20 23:52:29.617641 systemd[1]: Started session-5.scope - Session 5 of User core. Jan 20 23:52:29.908612 sshd[1884]: Connection closed by 20.161.92.111 port 40472 Jan 20 23:52:29.908522 sshd-session[1865]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:29.913112 systemd[1]: sshd@3-10.0.2.209:22-20.161.92.111:40472.service: Deactivated successfully. Jan 20 23:52:29.914719 systemd[1]: session-5.scope: Deactivated successfully. Jan 20 23:52:29.917407 systemd-logind[1651]: Session 5 logged out. Waiting for processes to exit. Jan 20 23:52:29.918515 systemd-logind[1651]: Removed session 5. Jan 20 23:52:30.017258 systemd[1]: Started sshd@4-10.0.2.209:22-20.161.92.111:40478.service - OpenSSH per-connection server daemon (20.161.92.111:40478). Jan 20 23:52:30.515293 chronyd[1635]: Selected source PHC0 Jan 20 23:52:30.549037 sshd[1890]: Accepted publickey for core from 20.161.92.111 port 40478 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:30.550338 sshd-session[1890]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:30.553882 systemd-logind[1651]: New session 6 of user core. Jan 20 23:52:30.568764 systemd[1]: Started session-6.scope - Session 6 of User core. Jan 20 23:52:30.809432 sshd[1894]: Connection closed by 20.161.92.111 port 40478 Jan 20 23:52:30.810132 sshd-session[1890]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:30.814327 systemd[1]: sshd@4-10.0.2.209:22-20.161.92.111:40478.service: Deactivated successfully. Jan 20 23:52:30.815879 systemd[1]: session-6.scope: Deactivated successfully. Jan 20 23:52:30.816654 systemd-logind[1651]: Session 6 logged out. Waiting for processes to exit. Jan 20 23:52:30.818041 systemd-logind[1651]: Removed session 6. Jan 20 23:52:30.910488 systemd[1]: Started sshd@5-10.0.2.209:22-20.161.92.111:40486.service - OpenSSH per-connection server daemon (20.161.92.111:40486). Jan 20 23:52:31.393931 sshd[1900]: Accepted publickey for core from 20.161.92.111 port 40486 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:31.395202 sshd-session[1900]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:31.398823 systemd-logind[1651]: New session 7 of user core. 
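The "(kubelet)[1875]: Referenced but unset environment variable" warning above means the kubelet unit expands KUBELET_EXTRA_ARGS and KUBELET_KUBEADM_ARGS but nothing defines them yet. One hedged way to supply extra flags is a systemd drop-in; the drop-in name and the --node-ip value below are illustrative, not taken from this host's configuration:

# Sketch: define the otherwise-empty variable in a drop-in and restart the unit.
mkdir -p /etc/systemd/system/kubelet.service.d
cat >/etc/systemd/system/kubelet.service.d/20-extra-args.conf <<'EOF'
[Service]
Environment="KUBELET_EXTRA_ARGS=--node-ip=10.0.2.209"
EOF
systemctl daemon-reload
systemctl restart kubelet.service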
Jan 20 23:52:31.409798 systemd[1]: Started session-7.scope - Session 7 of User core. Jan 20 23:52:31.658146 sshd[1904]: Connection closed by 20.161.92.111 port 40486 Jan 20 23:52:31.658476 sshd-session[1900]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:31.661692 systemd[1]: sshd@5-10.0.2.209:22-20.161.92.111:40486.service: Deactivated successfully. Jan 20 23:52:31.663236 systemd[1]: session-7.scope: Deactivated successfully. Jan 20 23:52:31.664567 systemd-logind[1651]: Session 7 logged out. Waiting for processes to exit. Jan 20 23:52:31.666064 systemd-logind[1651]: Removed session 7. Jan 20 23:52:31.772647 systemd[1]: Started sshd@6-10.0.2.209:22-20.161.92.111:40498.service - OpenSSH per-connection server daemon (20.161.92.111:40498). Jan 20 23:52:32.272136 sshd[1910]: Accepted publickey for core from 20.161.92.111 port 40498 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:32.273405 sshd-session[1910]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:32.277812 systemd-logind[1651]: New session 8 of user core. Jan 20 23:52:32.294938 systemd[1]: Started session-8.scope - Session 8 of User core. Jan 20 23:52:32.470938 sudo[1915]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/setenforce 1 Jan 20 23:52:32.471194 sudo[1915]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:52:32.486347 sudo[1915]: pam_unix(sudo:session): session closed for user root Jan 20 23:52:32.578220 sshd[1914]: Connection closed by 20.161.92.111 port 40498 Jan 20 23:52:32.578442 sshd-session[1910]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:32.582709 systemd[1]: sshd@6-10.0.2.209:22-20.161.92.111:40498.service: Deactivated successfully. Jan 20 23:52:32.584417 systemd[1]: session-8.scope: Deactivated successfully. Jan 20 23:52:32.587508 systemd-logind[1651]: Session 8 logged out. Waiting for processes to exit. Jan 20 23:52:32.588920 systemd-logind[1651]: Removed session 8. Jan 20 23:52:32.690929 systemd[1]: Started sshd@7-10.0.2.209:22-20.161.92.111:52034.service - OpenSSH per-connection server daemon (20.161.92.111:52034). Jan 20 23:52:33.189558 sshd[1922]: Accepted publickey for core from 20.161.92.111 port 52034 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:33.190845 sshd-session[1922]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:33.195439 systemd-logind[1651]: New session 9 of user core. Jan 20 23:52:33.201668 systemd[1]: Started session-9.scope - Session 9 of User core. Jan 20 23:52:33.377679 sudo[1928]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/rm -rf /etc/audit/rules.d/80-selinux.rules /etc/audit/rules.d/99-default.rules Jan 20 23:52:33.377932 sudo[1928]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:52:33.380506 sudo[1928]: pam_unix(sudo:session): session closed for user root Jan 20 23:52:33.386485 sudo[1927]: core : PWD=/home/core ; USER=root ; COMMAND=/usr/sbin/systemctl restart audit-rules Jan 20 23:52:33.386734 sudo[1927]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:52:33.393376 systemd[1]: Starting audit-rules.service - Load Audit Rules... 
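The sudo commands above delete /etc/audit/rules.d/80-selinux.rules and 99-default.rules and restart audit-rules, so augenrules reports "No rules" in the records that follow. A sketch of adding a rule back and reloading (the watched path and key are examples, not rules from this system):

# Sketch: drop a rule fragment into rules.d, then rebuild and load audit.rules.
cat >/etc/audit/rules.d/10-example.rules <<'EOF'
-w /etc/ssh/sshd_config -p wa -k sshd_config_changes
EOF
augenrules --load   # regenerates /etc/audit/audit.rules and loads it
auditctl -l         # list the rules now active in the kernel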
Jan 20 23:52:33.427000 audit: CONFIG_CHANGE auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 23:52:33.429012 kernel: kauditd_printk_skb: 186 callbacks suppressed Jan 20 23:52:33.429063 kernel: audit: type=1305 audit(1768953153.427:230): auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 op=remove_rule key=(null) list=5 res=1 Jan 20 23:52:33.427000 audit[1952]: SYSCALL arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd532c7b0 a2=420 a3=0 items=0 ppid=1933 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:33.431854 augenrules[1952]: No rules Jan 20 23:52:33.433328 systemd[1]: audit-rules.service: Deactivated successfully. Jan 20 23:52:33.433431 kernel: audit: type=1300 audit(1768953153.427:230): arch=c00000b7 syscall=206 success=yes exit=1056 a0=3 a1=ffffd532c7b0 a2=420 a3=0 items=0 ppid=1933 pid=1952 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="auditctl" exe="/usr/bin/auditctl" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:33.433479 kernel: audit: type=1327 audit(1768953153.427:230): proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:52:33.427000 audit: PROCTITLE proctitle=2F7362696E2F617564697463746C002D52002F6574632F61756469742F61756469742E72756C6573 Jan 20 23:52:33.433659 systemd[1]: Finished audit-rules.service - Load Audit Rules. Jan 20 23:52:33.434740 kernel: audit: type=1130 audit(1768953153.432:231): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.432000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.435042 sudo[1927]: pam_unix(sudo:session): session closed for user root Jan 20 23:52:33.432000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.439182 kernel: audit: type=1131 audit(1768953153.432:232): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=audit-rules comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.439219 kernel: audit: type=1106 audit(1768953153.433:233): pid=1927 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.433000 audit[1927]: USER_END pid=1927 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.433000 audit[1927]: CRED_DISP pid=1927 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:33.443624 kernel: audit: type=1104 audit(1768953153.433:234): pid=1927 uid=500 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.527214 sshd[1926]: Connection closed by 20.161.92.111 port 52034 Jan 20 23:52:33.527510 sshd-session[1922]: pam_unix(sshd:session): session closed for user core Jan 20 23:52:33.528000 audit[1922]: USER_END pid=1922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:33.531449 systemd[1]: sshd@7-10.0.2.209:22-20.161.92.111:52034.service: Deactivated successfully. Jan 20 23:52:33.528000 audit[1922]: CRED_DISP pid=1922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:33.533052 systemd[1]: session-9.scope: Deactivated successfully. Jan 20 23:52:33.533868 systemd-logind[1651]: Session 9 logged out. Waiting for processes to exit. Jan 20 23:52:33.534939 kernel: audit: type=1106 audit(1768953153.528:235): pid=1922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:33.535001 kernel: audit: type=1104 audit(1768953153.528:236): pid=1922 uid=0 auid=500 ses=9 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:33.535018 kernel: audit: type=1131 audit(1768953153.530:237): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.2.209:22-20.161.92.111:52034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.530000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@7-10.0.2.209:22-20.161.92.111:52034 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:33.535718 systemd-logind[1651]: Removed session 9. Jan 20 23:52:33.630188 systemd[1]: Started sshd@8-10.0.2.209:22-20.161.92.111:52042.service - OpenSSH per-connection server daemon (20.161.92.111:52042). Jan 20 23:52:33.629000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.209:22-20.161.92.111:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:52:34.128000 audit[1961]: USER_ACCT pid=1961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:34.129981 sshd[1961]: Accepted publickey for core from 20.161.92.111 port 52042 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:52:34.130000 audit[1961]: CRED_ACQ pid=1961 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:34.130000 audit[1961]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe0f6f360 a2=3 a3=0 items=0 ppid=1 pid=1961 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=10 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.130000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:52:34.131617 sshd-session[1961]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:52:34.135143 systemd-logind[1651]: New session 10 of user core. Jan 20 23:52:34.154352 systemd[1]: Started session-10.scope - Session 10 of User core. Jan 20 23:52:34.156000 audit[1961]: USER_START pid=1961 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:34.156000 audit[1965]: CRED_ACQ pid=1965 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:52:34.318000 audit[1966]: USER_ACCT pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_unix,pam_faillock acct="core" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:34.319515 sudo[1966]: core : PWD=/home/core ; USER=root ; COMMAND=/home/core/install.sh Jan 20 23:52:34.319000 audit[1966]: CRED_REFR pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:34.319000 audit[1966]: USER_START pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:52:34.319766 sudo[1966]: pam_unix(sudo:session): session opened for user root(uid=0) by core(uid=500) Jan 20 23:52:34.619243 systemd[1]: Starting docker.service - Docker Application Container Engine... 
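The NETFILTER_CFG audit records that follow capture dockerd creating its iptables and ip6tables chains (DOCKER, DOCKER-FORWARD, DOCKER-BRIDGE, DOCKER-CT, DOCKER-ISOLATION-STAGE-1/2, DOCKER-USER) as it loads containers. The PROCTITLE fields are hex-encoded, NUL-separated argv strings; a few of them decode roughly to the commands below (one empty argument in the MASQUERADE record is elided):

# Decoded from the hex PROCTITLE fields in the audit records below.
/usr/bin/iptables --wait -t nat -N DOCKER
/usr/bin/iptables --wait -t filter -N DOCKER-FORWARD
/usr/bin/iptables --wait -t nat -A PREROUTING -m addrtype --dst-type LOCAL -j DOCKER
/usr/bin/iptables --wait -I FORWARD -j DOCKER-FORWARD
/usr/bin/iptables --wait -A DOCKER-USER -j RETURN
/usr/bin/iptables --wait -t nat -I POSTROUTING -s 172.17.0.0/16 -o docker0 -j MASQUERADE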
Jan 20 23:52:34.638050 (dockerd)[1987]: docker.service: Referenced but unset environment variable evaluates to an empty string: DOCKER_CGROUPS, DOCKER_OPTS, DOCKER_OPT_BIP, DOCKER_OPT_IPMASQ, DOCKER_OPT_MTU Jan 20 23:52:34.867348 dockerd[1987]: time="2026-01-20T23:52:34.867295800Z" level=info msg="Starting up" Jan 20 23:52:34.868401 dockerd[1987]: time="2026-01-20T23:52:34.868376516Z" level=info msg="OTEL tracing is not configured, using no-op tracer provider" Jan 20 23:52:34.878192 dockerd[1987]: time="2026-01-20T23:52:34.878024827Z" level=info msg="Creating a containerd client" address=/var/run/docker/libcontainerd/docker-containerd.sock timeout=1m0s Jan 20 23:52:34.920089 dockerd[1987]: time="2026-01-20T23:52:34.919836093Z" level=info msg="Loading containers: start." Jan 20 23:52:34.933473 kernel: Initializing XFRM netlink socket Jan 20 23:52:34.986000 audit[2038]: NETFILTER_CFG table=nat:2 family=2 entries=2 op=nft_register_chain pid=2038 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.986000 audit[2038]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdd57c6f0 a2=0 a3=0 items=0 ppid=1987 pid=2038 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.986000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 23:52:34.988000 audit[2040]: NETFILTER_CFG table=filter:3 family=2 entries=2 op=nft_register_chain pid=2040 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.988000 audit[2040]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffc3a19610 a2=0 a3=0 items=0 ppid=1987 pid=2040 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.988000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 23:52:34.990000 audit[2042]: NETFILTER_CFG table=filter:4 family=2 entries=1 op=nft_register_chain pid=2042 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.990000 audit[2042]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc5cda20 a2=0 a3=0 items=0 ppid=1987 pid=2042 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.990000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 23:52:34.992000 audit[2044]: NETFILTER_CFG table=filter:5 family=2 entries=1 op=nft_register_chain pid=2044 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.992000 audit[2044]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd6502d20 a2=0 a3=0 items=0 ppid=1987 pid=2044 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.992000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 23:52:34.994000 audit[2046]: NETFILTER_CFG table=filter:6 family=2 entries=1 
op=nft_register_chain pid=2046 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.994000 audit[2046]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffe023380 a2=0 a3=0 items=0 ppid=1987 pid=2046 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.994000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 23:52:34.995000 audit[2048]: NETFILTER_CFG table=filter:7 family=2 entries=1 op=nft_register_chain pid=2048 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.995000 audit[2048]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff9517280 a2=0 a3=0 items=0 ppid=1987 pid=2048 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.995000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:52:34.997000 audit[2050]: NETFILTER_CFG table=filter:8 family=2 entries=1 op=nft_register_chain pid=2050 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.997000 audit[2050]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=fffff7829510 a2=0 a3=0 items=0 ppid=1987 pid=2050 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.997000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:52:34.999000 audit[2052]: NETFILTER_CFG table=nat:9 family=2 entries=2 op=nft_register_chain pid=2052 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:34.999000 audit[2052]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffdbb7f630 a2=0 a3=0 items=0 ppid=1987 pid=2052 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:34.999000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 23:52:35.041000 audit[2055]: NETFILTER_CFG table=nat:10 family=2 entries=2 op=nft_register_chain pid=2055 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.041000 audit[2055]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=472 a0=3 a1=ffffd0d619a0 a2=0 a3=0 items=0 ppid=1987 pid=2055 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.041000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003132372E302E302E302F38 Jan 20 23:52:35.043000 audit[2057]: NETFILTER_CFG table=filter:11 family=2 entries=2 op=nft_register_chain pid=2057 
subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.043000 audit[2057]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffd9397ff0 a2=0 a3=0 items=0 ppid=1987 pid=2057 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.043000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 23:52:35.045000 audit[2059]: NETFILTER_CFG table=filter:12 family=2 entries=1 op=nft_register_rule pid=2059 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.045000 audit[2059]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffd20d7b70 a2=0 a3=0 items=0 ppid=1987 pid=2059 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.045000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 23:52:35.047000 audit[2061]: NETFILTER_CFG table=filter:13 family=2 entries=1 op=nft_register_rule pid=2061 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.047000 audit[2061]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffee49ce30 a2=0 a3=0 items=0 ppid=1987 pid=2061 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.047000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:52:35.049000 audit[2063]: NETFILTER_CFG table=filter:14 family=2 entries=1 op=nft_register_rule pid=2063 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.049000 audit[2063]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=ffffdfd9b2a0 a2=0 a3=0 items=0 ppid=1987 pid=2063 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.049000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 23:52:35.094000 audit[2093]: NETFILTER_CFG table=nat:15 family=10 entries=2 op=nft_register_chain pid=2093 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.094000 audit[2093]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=116 a0=3 a1=ffffdc96a500 a2=0 a3=0 items=0 ppid=1987 pid=2093 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.094000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4E00444F434B4552 Jan 20 23:52:35.097000 audit[2095]: NETFILTER_CFG table=filter:16 family=10 entries=2 op=nft_register_chain pid=2095 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.097000 audit[2095]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=124 a0=3 a1=ffffcfb84760 a2=0 a3=0 items=0 ppid=1987 
pid=2095 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.097000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B4552 Jan 20 23:52:35.099000 audit[2097]: NETFILTER_CFG table=filter:17 family=10 entries=1 op=nft_register_chain pid=2097 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.099000 audit[2097]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe3f7f710 a2=0 a3=0 items=0 ppid=1987 pid=2097 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.099000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D464F5257415244 Jan 20 23:52:35.101000 audit[2099]: NETFILTER_CFG table=filter:18 family=10 entries=1 op=nft_register_chain pid=2099 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.101000 audit[2099]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd44265c0 a2=0 a3=0 items=0 ppid=1987 pid=2099 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.101000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D425249444745 Jan 20 23:52:35.102000 audit[2101]: NETFILTER_CFG table=filter:19 family=10 entries=1 op=nft_register_chain pid=2101 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.102000 audit[2101]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe9140bb0 a2=0 a3=0 items=0 ppid=1987 pid=2101 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.102000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D4354 Jan 20 23:52:35.104000 audit[2103]: NETFILTER_CFG table=filter:20 family=10 entries=1 op=nft_register_chain pid=2103 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.104000 audit[2103]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffedc84610 a2=0 a3=0 items=0 ppid=1987 pid=2103 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.104000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:52:35.106000 audit[2105]: NETFILTER_CFG table=filter:21 family=10 entries=1 op=nft_register_chain pid=2105 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.106000 audit[2105]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=112 a0=3 a1=ffffeab24480 a2=0 a3=0 items=0 ppid=1987 pid=2105 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 
23:52:35.106000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:52:35.108000 audit[2107]: NETFILTER_CFG table=nat:22 family=10 entries=2 op=nft_register_chain pid=2107 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.108000 audit[2107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=384 a0=3 a1=ffffd4778d80 a2=0 a3=0 items=0 ppid=1987 pid=2107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.108000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D4100505245524F5554494E47002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B4552 Jan 20 23:52:35.110000 audit[2109]: NETFILTER_CFG table=nat:23 family=10 entries=2 op=nft_register_chain pid=2109 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.110000 audit[2109]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=484 a0=3 a1=ffffe1261270 a2=0 a3=0 items=0 ppid=1987 pid=2109 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.110000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D74006E6174002D41004F5554505554002D6D006164647274797065002D2D6473742D74797065004C4F43414C002D6A00444F434B45520000002D2D647374003A3A312F313238 Jan 20 23:52:35.112000 audit[2111]: NETFILTER_CFG table=filter:24 family=10 entries=2 op=nft_register_chain pid=2111 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.112000 audit[2111]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffc95c2540 a2=0 a3=0 items=0 ppid=1987 pid=2111 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.112000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D464F5257415244 Jan 20 23:52:35.114000 audit[2113]: NETFILTER_CFG table=filter:25 family=10 entries=1 op=nft_register_rule pid=2113 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.114000 audit[2113]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=236 a0=3 a1=ffffc60e6f90 a2=0 a3=0 items=0 ppid=1987 pid=2113 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.114000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D425249444745 Jan 20 23:52:35.116000 audit[2115]: NETFILTER_CFG table=filter:26 family=10 entries=1 op=nft_register_rule pid=2115 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.116000 audit[2115]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=248 a0=3 a1=ffffe151b220 a2=0 a3=0 items=0 ppid=1987 pid=2115 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.116000 audit: 
PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D31 Jan 20 23:52:35.118000 audit[2117]: NETFILTER_CFG table=filter:27 family=10 entries=1 op=nft_register_rule pid=2117 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.118000 audit[2117]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=232 a0=3 a1=fffff2397f90 a2=0 a3=0 items=0 ppid=1987 pid=2117 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.118000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900444F434B45522D464F5257415244002D6A00444F434B45522D4354 Jan 20 23:52:35.124000 audit[2122]: NETFILTER_CFG table=filter:28 family=2 entries=1 op=nft_register_chain pid=2122 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.124000 audit[2122]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffd8130980 a2=0 a3=0 items=0 ppid=1987 pid=2122 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.124000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 23:52:35.126000 audit[2124]: NETFILTER_CFG table=filter:29 family=2 entries=1 op=nft_register_rule pid=2124 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.126000 audit[2124]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffd0956bf0 a2=0 a3=0 items=0 ppid=1987 pid=2124 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.126000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 23:52:35.127000 audit[2126]: NETFILTER_CFG table=filter:30 family=2 entries=1 op=nft_register_rule pid=2126 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.127000 audit[2126]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffc1d5e530 a2=0 a3=0 items=0 ppid=1987 pid=2126 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.127000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 23:52:35.129000 audit[2128]: NETFILTER_CFG table=filter:31 family=10 entries=1 op=nft_register_chain pid=2128 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.129000 audit[2128]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffda8c8a50 a2=0 a3=0 items=0 ppid=1987 pid=2128 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.129000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D740066696C746572002D4E00444F434B45522D55534552 Jan 20 23:52:35.131000 audit[2130]: NETFILTER_CFG table=filter:32 family=10 
entries=1 op=nft_register_rule pid=2130 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.131000 audit[2130]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=212 a0=3 a1=ffffe3e9d3d0 a2=0 a3=0 items=0 ppid=1987 pid=2130 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.131000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4100444F434B45522D55534552002D6A0052455455524E Jan 20 23:52:35.133000 audit[2132]: NETFILTER_CFG table=filter:33 family=10 entries=1 op=nft_register_rule pid=2132 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:35.133000 audit[2132]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=224 a0=3 a1=ffffd0beef30 a2=0 a3=0 items=0 ppid=1987 pid=2132 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.133000 audit: PROCTITLE proctitle=2F7573722F62696E2F6970367461626C6573002D2D77616974002D4900464F5257415244002D6A00444F434B45522D55534552 Jan 20 23:52:35.156000 audit[2137]: NETFILTER_CFG table=nat:34 family=2 entries=2 op=nft_register_chain pid=2137 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.156000 audit[2137]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=520 a0=3 a1=ffffe4c57470 a2=0 a3=0 items=0 ppid=1987 pid=2137 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.156000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900504F5354524F5554494E47002D73003137322E31372E302E302F31360000002D6F00646F636B657230002D6A004D415351554552414445 Jan 20 23:52:35.159000 audit[2139]: NETFILTER_CFG table=nat:35 family=2 entries=1 op=nft_register_rule pid=2139 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.159000 audit[2139]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=288 a0=3 a1=ffffe50310a0 a2=0 a3=0 items=0 ppid=1987 pid=2139 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.159000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D74006E6174002D4900444F434B4552002D6900646F636B657230002D6A0052455455524E Jan 20 23:52:35.166000 audit[2147]: NETFILTER_CFG table=filter:36 family=2 entries=1 op=nft_register_rule pid=2147 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.166000 audit[2147]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=300 a0=3 a1=ffffd19c3830 a2=0 a3=0 items=0 ppid=1987 pid=2147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.166000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D464F5257415244002D6900646F636B657230002D6A00414343455054 Jan 20 23:52:35.181000 audit[2153]: NETFILTER_CFG table=filter:37 family=2 entries=1 op=nft_register_rule pid=2153 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 
23:52:35.181000 audit[2153]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=376 a0=3 a1=ffffd2ed9210 a2=0 a3=0 items=0 ppid=1987 pid=2153 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.181000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45520000002D6900646F636B657230002D6F00646F636B657230002D6A0044524F50 Jan 20 23:52:35.183000 audit[2155]: NETFILTER_CFG table=filter:38 family=2 entries=1 op=nft_register_rule pid=2155 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.183000 audit[2155]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=512 a0=3 a1=fffff7540aa0 a2=0 a3=0 items=0 ppid=1987 pid=2155 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.183000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D4354002D6F00646F636B657230002D6D00636F6E6E747261636B002D2D637473746174650052454C415445442C45535441424C4953484544002D6A00414343455054 Jan 20 23:52:35.185000 audit[2157]: NETFILTER_CFG table=filter:39 family=2 entries=1 op=nft_register_rule pid=2157 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.185000 audit[2157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffd3545d10 a2=0 a3=0 items=0 ppid=1987 pid=2157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.185000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D425249444745002D6F00646F636B657230002D6A00444F434B4552 Jan 20 23:52:35.187000 audit[2159]: NETFILTER_CFG table=filter:40 family=2 entries=1 op=nft_register_rule pid=2159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.187000 audit[2159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=428 a0=3 a1=ffffd67cc0d0 a2=0 a3=0 items=0 ppid=1987 pid=2159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.187000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4100444F434B45522D49534F4C4154494F4E2D53544147452D31002D6900646F636B6572300000002D6F00646F636B657230002D6A00444F434B45522D49534F4C4154494F4E2D53544147452D32 Jan 20 23:52:35.189000 audit[2161]: NETFILTER_CFG table=filter:41 family=2 entries=1 op=nft_register_rule pid=2161 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:35.189000 audit[2161]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=312 a0=3 a1=ffffea4ddf30 a2=0 a3=0 items=0 ppid=1987 pid=2161 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:35.189000 audit: PROCTITLE proctitle=2F7573722F62696E2F69707461626C6573002D2D77616974002D740066696C746572002D4900444F434B45522D49534F4C4154494F4E2D53544147452D32002D6F00646F636B657230002D6A0044524F50 Jan 20 23:52:35.191091 
systemd-networkd[1586]: docker0: Link UP Jan 20 23:52:35.196498 dockerd[1987]: time="2026-01-20T23:52:35.196446130Z" level=info msg="Loading containers: done." Jan 20 23:52:35.209010 systemd[1]: var-lib-docker-overlay2-opaque\x2dbug\x2dcheck504755945-merged.mount: Deactivated successfully. Jan 20 23:52:35.219694 dockerd[1987]: time="2026-01-20T23:52:35.219637570Z" level=warning msg="Not using native diff for overlay2, this may cause degraded performance for building images: kernel has CONFIG_OVERLAY_FS_REDIRECT_DIR enabled" storage-driver=overlay2 Jan 20 23:52:35.219965 dockerd[1987]: time="2026-01-20T23:52:35.219721492Z" level=info msg="Docker daemon" commit=6430e49a55babd9b8f4d08e70ecb2b68900770fe containerd-snapshotter=false storage-driver=overlay2 version=28.0.4 Jan 20 23:52:35.219965 dockerd[1987]: time="2026-01-20T23:52:35.219880094Z" level=info msg="Initializing buildkit" Jan 20 23:52:35.252067 dockerd[1987]: time="2026-01-20T23:52:35.251972568Z" level=info msg="Completed buildkit initialization" Jan 20 23:52:35.259287 dockerd[1987]: time="2026-01-20T23:52:35.259234654Z" level=info msg="Daemon has completed initialization" Jan 20 23:52:35.259500 dockerd[1987]: time="2026-01-20T23:52:35.259290695Z" level=info msg="API listen on /run/docker.sock" Jan 20 23:52:35.259838 systemd[1]: Started docker.service - Docker Application Container Engine. Jan 20 23:52:35.258000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=docker comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:36.572177 containerd[1670]: time="2026-01-20T23:52:36.572106826Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\"" Jan 20 23:52:37.354413 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3041651635.mount: Deactivated successfully. 
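
The audit PROCTITLE fields in the entries above carry each process command line as a hex-encoded, NUL-separated argv (here, the iptables/ip6tables invocations Docker issues to create its chains). A minimal decoding sketch in Python, using one of the hex strings logged above as the sample; the helper name is illustrative, not part of any tool in this log:

# Decode an audit PROCTITLE hex string back into the original argv.
def decode_proctitle(hex_str: str) -> list[str]:
    raw = bytes.fromhex(hex_str)       # PROCTITLE is hex-encoded
    return raw.decode().split("\x00")  # argv elements are NUL-separated

# Sample taken from the first ip6tables entry above.
print(decode_proctitle(
    "2F7573722F62696E2F6970367461626C6573002D2D77616974"
    "002D740066696C746572002D4E00444F434B4552"
))
# -> ['/usr/bin/ip6tables', '--wait', '-t', 'filter', '-N', 'DOCKER']
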
Jan 20 23:52:38.312444 containerd[1670]: time="2026-01-20T23:52:38.311762313Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:38.312905 containerd[1670]: time="2026-01-20T23:52:38.312847436Z" level=info msg="stop pulling image registry.k8s.io/kube-apiserver:v1.33.7: active requests=0, bytes read=25791094" Jan 20 23:52:38.313796 containerd[1670]: time="2026-01-20T23:52:38.313767519Z" level=info msg="ImageCreate event name:\"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:38.317654 containerd[1670]: time="2026-01-20T23:52:38.317621770Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:38.319663 containerd[1670]: time="2026-01-20T23:52:38.319622135Z" level=info msg="Pulled image \"registry.k8s.io/kube-apiserver:v1.33.7\" with image id \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\", repo tag \"registry.k8s.io/kube-apiserver:v1.33.7\", repo digest \"registry.k8s.io/kube-apiserver@sha256:9585226cb85d1dc0f0ef5f7a75f04e4bc91ddd82de249533bd293aa3cf958dab\", size \"27383880\" in 1.747472349s" Jan 20 23:52:38.319765 containerd[1670]: time="2026-01-20T23:52:38.319750336Z" level=info msg="PullImage \"registry.k8s.io/kube-apiserver:v1.33.7\" returns image reference \"sha256:6d7bc8e445519fe4d49eee834f33f3e165eef4d3c0919ba08c67cdf8db905b7e\"" Jan 20 23:52:38.321189 containerd[1670]: time="2026-01-20T23:52:38.321154740Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\"" Jan 20 23:52:39.426255 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 3. Jan 20 23:52:39.427656 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:39.583607 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:39.582000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:39.587043 kernel: kauditd_printk_skb: 132 callbacks suppressed Jan 20 23:52:39.587107 kernel: audit: type=1130 audit(1768953159.582:288): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:39.587981 (kubelet)[2269]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:52:39.627714 kubelet[2269]: E0120 23:52:39.627659 2269 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:52:39.630449 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:52:39.630591 systemd[1]: kubelet.service: Failed with result 'exit-code'. 
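
The kubelet failure above ("/var/lib/kubelet/config.yaml: no such file or directory", restart counter at 3) is the usual pre-bootstrap crash loop: that config file is normally written by kubeadm init/join, and until it exists the unit exits with status 1 and systemd keeps rescheduling it. A minimal pre-flight check, as a sketch only (the path comes from the log; the check itself is illustrative):

from pathlib import Path

# Path reported in the kubelet error above.
KUBELET_CONFIG = Path("/var/lib/kubelet/config.yaml")

if not KUBELET_CONFIG.exists():
    # Until kubeadm init/join writes this file, kubelet exits with
    # status=1/FAILURE and systemd schedules another restart.
    print(f"{KUBELET_CONFIG} is missing; kubelet will keep crash-looping")
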
Jan 20 23:52:39.629000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:52:39.631533 systemd[1]: kubelet.service: Consumed 143ms CPU time, 107.4M memory peak. Jan 20 23:52:39.634487 kernel: audit: type=1131 audit(1768953159.629:289): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:52:39.960860 containerd[1670]: time="2026-01-20T23:52:39.960743308Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:39.962935 containerd[1670]: time="2026-01-20T23:52:39.962875914Z" level=info msg="stop pulling image registry.k8s.io/kube-controller-manager:v1.33.7: active requests=0, bytes read=23544927" Jan 20 23:52:39.964859 containerd[1670]: time="2026-01-20T23:52:39.964817280Z" level=info msg="ImageCreate event name:\"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:39.968447 containerd[1670]: time="2026-01-20T23:52:39.968415810Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:39.969519 containerd[1670]: time="2026-01-20T23:52:39.969475573Z" level=info msg="Pulled image \"registry.k8s.io/kube-controller-manager:v1.33.7\" with image id \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\", repo tag \"registry.k8s.io/kube-controller-manager:v1.33.7\", repo digest \"registry.k8s.io/kube-controller-manager@sha256:f69d77ca0626b5a4b7b432c18de0952941181db7341c80eb89731f46d1d0c230\", size \"25137562\" in 1.648050433s" Jan 20 23:52:39.969588 containerd[1670]: time="2026-01-20T23:52:39.969526533Z" level=info msg="PullImage \"registry.k8s.io/kube-controller-manager:v1.33.7\" returns image reference \"sha256:a94595d0240bcc5e538b4b33bbc890512a731425be69643cbee284072f7d8f64\"" Jan 20 23:52:39.970351 containerd[1670]: time="2026-01-20T23:52:39.970268655Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\"" Jan 20 23:52:41.214914 containerd[1670]: time="2026-01-20T23:52:41.214846250Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:41.216422 containerd[1670]: time="2026-01-20T23:52:41.216360494Z" level=info msg="stop pulling image registry.k8s.io/kube-scheduler:v1.33.7: active requests=0, bytes read=18289931" Jan 20 23:52:41.217374 containerd[1670]: time="2026-01-20T23:52:41.217319177Z" level=info msg="ImageCreate event name:\"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:41.220406 containerd[1670]: time="2026-01-20T23:52:41.220355866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:41.221441 containerd[1670]: time="2026-01-20T23:52:41.221304748Z" level=info msg="Pulled image \"registry.k8s.io/kube-scheduler:v1.33.7\" 
with image id \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\", repo tag \"registry.k8s.io/kube-scheduler:v1.33.7\", repo digest \"registry.k8s.io/kube-scheduler@sha256:21bda321d8b4d48eb059fbc1593203d55d8b3bc7acd0584e04e55504796d78d0\", size \"19882566\" in 1.250869972s" Jan 20 23:52:41.221441 containerd[1670]: time="2026-01-20T23:52:41.221339309Z" level=info msg="PullImage \"registry.k8s.io/kube-scheduler:v1.33.7\" returns image reference \"sha256:94005b6be50f054c8a4ef3f0d6976644e8b3c6a8bf15a9e8a2eeac3e8331b010\"" Jan 20 23:52:41.222043 containerd[1670]: time="2026-01-20T23:52:41.221768070Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\"" Jan 20 23:52:42.137017 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3827426253.mount: Deactivated successfully. Jan 20 23:52:42.630167 containerd[1670]: time="2026-01-20T23:52:42.629506051Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy:v1.33.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:42.630604 containerd[1670]: time="2026-01-20T23:52:42.630561294Z" level=info msg="stop pulling image registry.k8s.io/kube-proxy:v1.33.7: active requests=0, bytes read=18413667" Jan 20 23:52:42.631955 containerd[1670]: time="2026-01-20T23:52:42.631893177Z" level=info msg="ImageCreate event name:\"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:42.634097 containerd[1670]: time="2026-01-20T23:52:42.634053984Z" level=info msg="ImageCreate event name:\"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:42.634951 containerd[1670]: time="2026-01-20T23:52:42.634914306Z" level=info msg="Pulled image \"registry.k8s.io/kube-proxy:v1.33.7\" with image id \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\", repo tag \"registry.k8s.io/kube-proxy:v1.33.7\", repo digest \"registry.k8s.io/kube-proxy@sha256:ec25702b19026e9c0d339bc1c3bd231435a59f28b5fccb21e1b1078a357380f5\", size \"28257692\" in 1.413120556s" Jan 20 23:52:42.634990 containerd[1670]: time="2026-01-20T23:52:42.634952106Z" level=info msg="PullImage \"registry.k8s.io/kube-proxy:v1.33.7\" returns image reference \"sha256:78ccb937011a53894db229033fd54e237d478ec85315f8b08e5dcaa0f737111b\"" Jan 20 23:52:42.635563 containerd[1670]: time="2026-01-20T23:52:42.635538588Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\"" Jan 20 23:52:43.271442 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount301320817.mount: Deactivated successfully. 
Jan 20 23:52:43.834280 containerd[1670]: time="2026-01-20T23:52:43.833566170Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns:v1.12.0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:43.835080 containerd[1670]: time="2026-01-20T23:52:43.835003734Z" level=info msg="stop pulling image registry.k8s.io/coredns/coredns:v1.12.0: active requests=0, bytes read=18338344" Jan 20 23:52:43.835863 containerd[1670]: time="2026-01-20T23:52:43.835827736Z" level=info msg="ImageCreate event name:\"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:43.839504 containerd[1670]: time="2026-01-20T23:52:43.839472866Z" level=info msg="ImageCreate event name:\"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:43.841597 containerd[1670]: time="2026-01-20T23:52:43.841556552Z" level=info msg="Pulled image \"registry.k8s.io/coredns/coredns:v1.12.0\" with image id \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\", repo tag \"registry.k8s.io/coredns/coredns:v1.12.0\", repo digest \"registry.k8s.io/coredns/coredns@sha256:40384aa1f5ea6bfdc77997d243aec73da05f27aed0c5e9d65bfa98933c519d97\", size \"19148915\" in 1.205985364s" Jan 20 23:52:43.841597 containerd[1670]: time="2026-01-20T23:52:43.841592272Z" level=info msg="PullImage \"registry.k8s.io/coredns/coredns:v1.12.0\" returns image reference \"sha256:f72407be9e08c3a1b29a88318cbfee87b9f2da489f84015a5090b1e386e4dbc1\"" Jan 20 23:52:43.842253 containerd[1670]: time="2026-01-20T23:52:43.842020674Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\"" Jan 20 23:52:44.366186 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount2845752566.mount: Deactivated successfully. 
Jan 20 23:52:44.374841 containerd[1670]: time="2026-01-20T23:52:44.374785515Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause:3.10\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:52:44.376921 containerd[1670]: time="2026-01-20T23:52:44.376856481Z" level=info msg="stop pulling image registry.k8s.io/pause:3.10: active requests=0, bytes read=0" Jan 20 23:52:44.379108 containerd[1670]: time="2026-01-20T23:52:44.379057368Z" level=info msg="ImageCreate event name:\"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:52:44.382199 containerd[1670]: time="2026-01-20T23:52:44.382149056Z" level=info msg="ImageCreate event name:\"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"} labels:{key:\"io.cri-containerd.pinned\" value:\"pinned\"}" Jan 20 23:52:44.383315 containerd[1670]: time="2026-01-20T23:52:44.383274220Z" level=info msg="Pulled image \"registry.k8s.io/pause:3.10\" with image id \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\", repo tag \"registry.k8s.io/pause:3.10\", repo digest \"registry.k8s.io/pause@sha256:ee6521f290b2168b6e0935a181d4cff9be1ac3f505666ef0e3c98fae8199917a\", size \"267933\" in 541.226786ms" Jan 20 23:52:44.383315 containerd[1670]: time="2026-01-20T23:52:44.383306340Z" level=info msg="PullImage \"registry.k8s.io/pause:3.10\" returns image reference \"sha256:afb61768ce381961ca0beff95337601f29dc70ff3ed14e5e4b3e5699057e6aa8\"" Jan 20 23:52:44.383846 containerd[1670]: time="2026-01-20T23:52:44.383809741Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\"" Jan 20 23:52:44.911276 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount1343749261.mount: Deactivated successfully. 
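
Each "Pulled image" record above reports the image size in bytes and the wall-clock pull duration, so approximate pull throughput can be recovered straight from the log. A rough parsing sketch; the regex and the sample line are assumptions based on the log text above, not an official containerd format:

import re

# Match: ... size "267933" in 541.226786ms  (quotes may appear escaped in the journal)
PULLED = re.compile(r'size \\?"(\d+)\\?" in ([\d.]+)(ms|s)\b')

def throughput_mb_per_s(line: str) -> float:
    size, value, unit = PULLED.search(line).groups()
    seconds = float(value) / (1000.0 if unit == "ms" else 1.0)
    return int(size) / seconds / 1e6

sample = 'Pulled image "registry.k8s.io/pause:3.10" ... size "267933" in 541.226786ms'
print(f"{throughput_mb_per_s(sample):.2f} MB/s")  # ~0.50 MB/s for the small pause image
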
Jan 20 23:52:46.984665 containerd[1670]: time="2026-01-20T23:52:46.984559840Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd:3.5.21-0\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:46.985998 containerd[1670]: time="2026-01-20T23:52:46.985920964Z" level=info msg="stop pulling image registry.k8s.io/etcd:3.5.21-0: active requests=0, bytes read=57926377" Jan 20 23:52:46.987519 containerd[1670]: time="2026-01-20T23:52:46.987487048Z" level=info msg="ImageCreate event name:\"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:46.992189 containerd[1670]: time="2026-01-20T23:52:46.992132941Z" level=info msg="ImageCreate event name:\"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:52:46.993246 containerd[1670]: time="2026-01-20T23:52:46.993200104Z" level=info msg="Pulled image \"registry.k8s.io/etcd:3.5.21-0\" with image id \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\", repo tag \"registry.k8s.io/etcd:3.5.21-0\", repo digest \"registry.k8s.io/etcd@sha256:d58c035df557080a27387d687092e3fc2b64c6d0e3162dc51453a115f847d121\", size \"70026017\" in 2.609354563s" Jan 20 23:52:46.993246 containerd[1670]: time="2026-01-20T23:52:46.993237184Z" level=info msg="PullImage \"registry.k8s.io/etcd:3.5.21-0\" returns image reference \"sha256:31747a36ce712f0bf61b50a0c06e99768522025e7b8daedd6dc63d1ae84837b5\"" Jan 20 23:52:49.676434 systemd[1]: kubelet.service: Scheduled restart job, restart counter is at 4. Jan 20 23:52:49.677942 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:49.805264 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:49.803000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:49.814460 kernel: audit: type=1130 audit(1768953169.803:290): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:49.812058 (kubelet)[2436]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS, KUBELET_KUBEADM_ARGS Jan 20 23:52:49.846887 kubelet[2436]: E0120 23:52:49.846830 2436 run.go:72] "command failed" err="failed to load kubelet config file, path: /var/lib/kubelet/config.yaml, error: failed to load Kubelet config file /var/lib/kubelet/config.yaml, error failed to read kubelet config file \"/var/lib/kubelet/config.yaml\", error: open /var/lib/kubelet/config.yaml: no such file or directory" Jan 20 23:52:49.849606 systemd[1]: kubelet.service: Main process exited, code=exited, status=1/FAILURE Jan 20 23:52:49.849744 systemd[1]: kubelet.service: Failed with result 'exit-code'. Jan 20 23:52:49.848000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:52:49.850111 systemd[1]: kubelet.service: Consumed 134ms CPU time, 106.4M memory peak. 
Jan 20 23:52:49.853513 kernel: audit: type=1131 audit(1768953169.848:291): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:52:51.631609 update_engine[1652]: I20260120 23:52:51.631212 1652 update_attempter.cc:509] Updating boot flags... Jan 20 23:52:54.126000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:54.127610 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:54.127791 systemd[1]: kubelet.service: Consumed 134ms CPU time, 106.4M memory peak. Jan 20 23:52:54.130691 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:54.126000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:54.133655 kernel: audit: type=1130 audit(1768953174.126:292): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:54.133723 kernel: audit: type=1131 audit(1768953174.126:293): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:54.150611 systemd[1]: Reload requested from client PID 2467 ('systemctl') (unit session-10.scope)... Jan 20 23:52:54.150627 systemd[1]: Reloading... Jan 20 23:52:54.224489 zram_generator::config[2514]: No configuration found. Jan 20 23:52:54.403114 systemd[1]: Reloading finished in 252 ms. 
Jan 20 23:52:54.421497 kernel: audit: type=1334 audit(1768953174.418:294): prog-id=63 op=LOAD Jan 20 23:52:54.421585 kernel: audit: type=1334 audit(1768953174.418:295): prog-id=55 op=UNLOAD Jan 20 23:52:54.418000 audit: BPF prog-id=63 op=LOAD Jan 20 23:52:54.418000 audit: BPF prog-id=55 op=UNLOAD Jan 20 23:52:54.419000 audit: BPF prog-id=64 op=LOAD Jan 20 23:52:54.419000 audit: BPF prog-id=65 op=LOAD Jan 20 23:52:54.423935 kernel: audit: type=1334 audit(1768953174.419:296): prog-id=64 op=LOAD Jan 20 23:52:54.423984 kernel: audit: type=1334 audit(1768953174.419:297): prog-id=65 op=LOAD Jan 20 23:52:54.424106 kernel: audit: type=1334 audit(1768953174.419:298): prog-id=56 op=UNLOAD Jan 20 23:52:54.419000 audit: BPF prog-id=56 op=UNLOAD Jan 20 23:52:54.424645 kernel: audit: type=1334 audit(1768953174.419:299): prog-id=57 op=UNLOAD Jan 20 23:52:54.419000 audit: BPF prog-id=57 op=UNLOAD Jan 20 23:52:54.421000 audit: BPF prog-id=66 op=LOAD Jan 20 23:52:54.424000 audit: BPF prog-id=67 op=LOAD Jan 20 23:52:54.424000 audit: BPF prog-id=50 op=UNLOAD Jan 20 23:52:54.424000 audit: BPF prog-id=51 op=UNLOAD Jan 20 23:52:54.424000 audit: BPF prog-id=68 op=LOAD Jan 20 23:52:54.424000 audit: BPF prog-id=58 op=UNLOAD Jan 20 23:52:54.425000 audit: BPF prog-id=69 op=LOAD Jan 20 23:52:54.425000 audit: BPF prog-id=43 op=UNLOAD Jan 20 23:52:54.425000 audit: BPF prog-id=70 op=LOAD Jan 20 23:52:54.425000 audit: BPF prog-id=71 op=LOAD Jan 20 23:52:54.425000 audit: BPF prog-id=44 op=UNLOAD Jan 20 23:52:54.425000 audit: BPF prog-id=45 op=UNLOAD Jan 20 23:52:54.426000 audit: BPF prog-id=72 op=LOAD Jan 20 23:52:54.426000 audit: BPF prog-id=46 op=UNLOAD Jan 20 23:52:54.427000 audit: BPF prog-id=73 op=LOAD Jan 20 23:52:54.427000 audit: BPF prog-id=60 op=UNLOAD Jan 20 23:52:54.427000 audit: BPF prog-id=74 op=LOAD Jan 20 23:52:54.427000 audit: BPF prog-id=75 op=LOAD Jan 20 23:52:54.427000 audit: BPF prog-id=61 op=UNLOAD Jan 20 23:52:54.427000 audit: BPF prog-id=62 op=UNLOAD Jan 20 23:52:54.428000 audit: BPF prog-id=76 op=LOAD Jan 20 23:52:54.428000 audit: BPF prog-id=52 op=UNLOAD Jan 20 23:52:54.428000 audit: BPF prog-id=77 op=LOAD Jan 20 23:52:54.428000 audit: BPF prog-id=78 op=LOAD Jan 20 23:52:54.428000 audit: BPF prog-id=53 op=UNLOAD Jan 20 23:52:54.428000 audit: BPF prog-id=54 op=UNLOAD Jan 20 23:52:54.429000 audit: BPF prog-id=79 op=LOAD Jan 20 23:52:54.429000 audit: BPF prog-id=59 op=UNLOAD Jan 20 23:52:54.430000 audit: BPF prog-id=80 op=LOAD Jan 20 23:52:54.430000 audit: BPF prog-id=47 op=UNLOAD Jan 20 23:52:54.430000 audit: BPF prog-id=81 op=LOAD Jan 20 23:52:54.430000 audit: BPF prog-id=82 op=LOAD Jan 20 23:52:54.430000 audit: BPF prog-id=48 op=UNLOAD Jan 20 23:52:54.430000 audit: BPF prog-id=49 op=UNLOAD Jan 20 23:52:54.451357 systemd[1]: kubelet.service: Control process exited, code=killed, status=15/TERM Jan 20 23:52:54.451443 systemd[1]: kubelet.service: Failed with result 'signal'. Jan 20 23:52:54.450000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=failed' Jan 20 23:52:54.451768 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:54.451829 systemd[1]: kubelet.service: Consumed 93ms CPU time, 95.4M memory peak. Jan 20 23:52:54.453569 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:54.579413 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. 
Jan 20 23:52:54.579000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:54.585446 (kubelet)[2562]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:52:54.615010 kubelet[2562]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:52:54.615010 kubelet[2562]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 23:52:54.615010 kubelet[2562]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:52:54.615348 kubelet[2562]: I0120 23:52:54.615047 2562 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:52:55.252803 kubelet[2562]: I0120 23:52:55.252763 2562 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 23:52:55.252935 kubelet[2562]: I0120 23:52:55.252926 2562 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:52:55.253199 kubelet[2562]: I0120 23:52:55.253182 2562 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 23:52:55.288044 kubelet[2562]: E0120 23:52:55.287988 2562 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://10.0.2.209:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 10.0.2.209:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 20 23:52:55.289397 kubelet[2562]: I0120 23:52:55.289360 2562 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:52:55.299659 kubelet[2562]: I0120 23:52:55.299618 2562 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:52:55.302330 kubelet[2562]: I0120 23:52:55.302298 2562 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 23:52:55.304243 kubelet[2562]: I0120 23:52:55.304178 2562 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:52:55.304392 kubelet[2562]: I0120 23:52:55.304230 2562 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-e5b472a427","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:52:55.304677 kubelet[2562]: I0120 23:52:55.304486 2562 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:52:55.304677 kubelet[2562]: I0120 23:52:55.304499 2562 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 23:52:55.305524 kubelet[2562]: I0120 23:52:55.305498 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:52:55.315806 kubelet[2562]: I0120 23:52:55.315772 2562 kubelet.go:480] "Attempting to sync node with API server" Jan 20 23:52:55.315806 kubelet[2562]: I0120 23:52:55.315797 2562 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:52:55.316005 kubelet[2562]: I0120 23:52:55.315827 2562 kubelet.go:386] "Adding apiserver pod source" Jan 20 23:52:55.317882 kubelet[2562]: I0120 23:52:55.317413 2562 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:52:55.319061 kubelet[2562]: E0120 23:52:55.319020 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://10.0.2.209:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 10.0.2.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 20 23:52:55.319349 kubelet[2562]: I0120 23:52:55.319319 2562 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:52:55.319735 kubelet[2562]: E0120 23:52:55.319684 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get 
\"https://10.0.2.209:6443/api/v1/nodes?fieldSelector=metadata.name%3Dci-4547-0-0-n-e5b472a427&limit=500&resourceVersion=0\": dial tcp 10.0.2.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 20 23:52:55.320282 kubelet[2562]: I0120 23:52:55.320244 2562 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 23:52:55.320403 kubelet[2562]: W0120 23:52:55.320389 2562 probe.go:272] Flexvolume plugin directory at /opt/libexec/kubernetes/kubelet-plugins/volume/exec/ does not exist. Recreating. Jan 20 23:52:55.323233 kubelet[2562]: I0120 23:52:55.322947 2562 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 23:52:55.323233 kubelet[2562]: I0120 23:52:55.322989 2562 server.go:1289] "Started kubelet" Jan 20 23:52:55.325647 kubelet[2562]: I0120 23:52:55.325574 2562 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 23:52:55.325873 kubelet[2562]: I0120 23:52:55.325838 2562 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:52:55.325922 kubelet[2562]: I0120 23:52:55.325878 2562 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:52:55.325993 kubelet[2562]: I0120 23:52:55.325957 2562 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:52:55.327333 kubelet[2562]: I0120 23:52:55.327291 2562 server.go:317] "Adding debug handlers to kubelet server" Jan 20 23:52:55.329352 kubelet[2562]: I0120 23:52:55.329320 2562 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:52:55.329673 kubelet[2562]: I0120 23:52:55.329654 2562 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 23:52:55.329891 kubelet[2562]: E0120 23:52:55.329867 2562 kubelet_node_status.go:466] "Error getting the current node from lister" err="node \"ci-4547-0-0-n-e5b472a427\" not found" Jan 20 23:52:55.330161 kubelet[2562]: I0120 23:52:55.330144 2562 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 23:52:55.330210 kubelet[2562]: I0120 23:52:55.330198 2562 reconciler.go:26] "Reconciler: start to sync state" Jan 20 23:52:55.334474 kubelet[2562]: E0120 23:52:55.333821 2562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-e5b472a427?timeout=10s\": dial tcp 10.0.2.209:6443: connect: connection refused" interval="200ms" Jan 20 23:52:55.334611 kubelet[2562]: I0120 23:52:55.334574 2562 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:52:55.334000 audit[2579]: NETFILTER_CFG table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.335950 kubelet[2562]: E0120 23:52:55.335922 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://10.0.2.209:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 10.0.2.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" 
type="*v1.CSIDriver" Jan 20 23:52:55.337112 kernel: kauditd_printk_skb: 36 callbacks suppressed Jan 20 23:52:55.337179 kernel: audit: type=1325 audit(1768953175.334:336): table=mangle:42 family=2 entries=2 op=nft_register_chain pid=2579 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.337204 kernel: audit: type=1300 audit(1768953175.334:336): arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff46165b0 a2=0 a3=0 items=0 ppid=2562 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.334000 audit[2579]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=fffff46165b0 a2=0 a3=0 items=0 ppid=2562 pid=2579 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.339803 kubelet[2562]: I0120 23:52:55.339749 2562 factory.go:223] Registration of the containerd container factory successfully Jan 20 23:52:55.339803 kubelet[2562]: I0120 23:52:55.339790 2562 factory.go:223] Registration of the systemd container factory successfully Jan 20 23:52:55.340450 kubelet[2562]: E0120 23:52:55.336431 2562 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://10.0.2.209:6443/api/v1/namespaces/default/events\": dial tcp 10.0.2.209:6443: connect: connection refused" event="&Event{ObjectMeta:{ci-4547-0-0-n-e5b472a427.188c958a676cb55a default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:ci-4547-0-0-n-e5b472a427,UID:ci-4547-0-0-n-e5b472a427,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-e5b472a427,},FirstTimestamp:2026-01-20 23:52:55.322965338 +0000 UTC m=+0.734437381,LastTimestamp:2026-01-20 23:52:55.322965338 +0000 UTC m=+0.734437381,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-e5b472a427,}" Jan 20 23:52:55.334000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:52:55.341947 kubelet[2562]: E0120 23:52:55.341920 2562 kubelet.go:1600] "Image garbage collection failed once. 
Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 23:52:55.342308 kernel: audit: type=1327 audit(1768953175.334:336): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:52:55.342355 kernel: audit: type=1325 audit(1768953175.335:337): table=filter:43 family=2 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.335000 audit[2580]: NETFILTER_CFG table=filter:43 family=2 entries=1 op=nft_register_chain pid=2580 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.335000 audit[2580]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca08b780 a2=0 a3=0 items=0 ppid=2562 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.347459 kernel: audit: type=1300 audit(1768953175.335:337): arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffca08b780 a2=0 a3=0 items=0 ppid=2562 pid=2580 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.347568 kernel: audit: type=1327 audit(1768953175.335:337): proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:52:55.335000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:52:55.340000 audit[2582]: NETFILTER_CFG table=filter:44 family=2 entries=2 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.350600 kernel: audit: type=1325 audit(1768953175.340:338): table=filter:44 family=2 entries=2 op=nft_register_chain pid=2582 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.350664 kernel: audit: type=1300 audit(1768953175.340:338): arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdb9b0e10 a2=0 a3=0 items=0 ppid=2562 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.340000 audit[2582]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffdb9b0e10 a2=0 a3=0 items=0 ppid=2562 pid=2582 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.340000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:52:55.355094 kubelet[2562]: I0120 23:52:55.354795 2562 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:52:55.355094 kubelet[2562]: I0120 23:52:55.354814 2562 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:52:55.355094 kubelet[2562]: I0120 23:52:55.354829 2562 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:52:55.356511 kubelet[2562]: I0120 23:52:55.356470 2562 kubelet_network_linux.go:49] "Initialized iptables rules." 
protocol="IPv4" Jan 20 23:52:55.356917 kernel: audit: type=1327 audit(1768953175.340:338): proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:52:55.356953 kernel: audit: type=1325 audit(1768953175.342:339): table=filter:45 family=2 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.342000 audit[2585]: NETFILTER_CFG table=filter:45 family=2 entries=2 op=nft_register_chain pid=2585 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.357746 kubelet[2562]: I0120 23:52:55.357720 2562 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 23:52:55.357819 kubelet[2562]: I0120 23:52:55.357755 2562 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 23:52:55.357819 kubelet[2562]: I0120 23:52:55.357785 2562 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." Jan 20 23:52:55.357819 kubelet[2562]: I0120 23:52:55.357793 2562 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 23:52:55.357907 kubelet[2562]: E0120 23:52:55.357835 2562 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:52:55.342000 audit[2585]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=340 a0=3 a1=ffffcb1f0910 a2=0 a3=0 items=0 ppid=2562 pid=2585 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.342000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:52:55.355000 audit[2590]: NETFILTER_CFG table=filter:46 family=2 entries=1 op=nft_register_rule pid=2590 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.355000 audit[2590]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=924 a0=3 a1=ffffe3908010 a2=0 a3=0 items=0 ppid=2562 pid=2590 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.355000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D41004B5542452D4649524557414C4C002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E7400626C6F636B20696E636F6D696E67206C6F63616C6E657420636F6E6E656374696F6E73002D2D647374003132372E302E302E302F38 Jan 20 23:52:55.356000 audit[2592]: NETFILTER_CFG table=mangle:47 family=10 entries=2 op=nft_register_chain pid=2592 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:55.356000 audit[2592]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=136 a0=3 a1=ffffc51f6c70 a2=0 a3=0 items=0 ppid=2562 pid=2592 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.356000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D49505441424C45532D48494E54002D74006D616E676C65 Jan 20 23:52:55.357000 audit[2591]: NETFILTER_CFG table=mangle:48 family=2 entries=1 op=nft_register_chain pid=2591 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
Jan 20 23:52:55.357000 audit[2591]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe05b2d70 a2=0 a3=0 items=0 ppid=2562 pid=2591 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.357000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 23:52:55.358000 audit[2594]: NETFILTER_CFG table=mangle:49 family=10 entries=1 op=nft_register_chain pid=2594 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:55.358000 audit[2594]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd739b80 a2=0 a3=0 items=0 ppid=2562 pid=2594 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.358000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006D616E676C65 Jan 20 23:52:55.360000 audit[2596]: NETFILTER_CFG table=nat:50 family=10 entries=1 op=nft_register_chain pid=2596 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:55.360000 audit[2596]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffed3c8c70 a2=0 a3=0 items=0 ppid=2562 pid=2596 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.360000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 23:52:55.360973 kubelet[2562]: E0120 23:52:55.360930 2562 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://10.0.2.209:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 10.0.2.209:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 20 23:52:55.360000 audit[2597]: NETFILTER_CFG table=nat:51 family=2 entries=1 op=nft_register_chain pid=2597 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.360000 audit[2597]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffe50d1280 a2=0 a3=0 items=0 ppid=2562 pid=2597 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.360000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D74006E6174 Jan 20 23:52:55.361707 kubelet[2562]: I0120 23:52:55.361690 2562 policy_none.go:49] "None policy: Start" Jan 20 23:52:55.361775 kubelet[2562]: I0120 23:52:55.361766 2562 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 23:52:55.361829 kubelet[2562]: I0120 23:52:55.361821 2562 state_mem.go:35] "Initializing new in-memory state store" Jan 20 23:52:55.362000 audit[2598]: NETFILTER_CFG table=filter:52 family=10 entries=1 op=nft_register_chain pid=2598 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:52:55.362000 audit[2598]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffc6a25ec0 a2=0 a3=0 
items=0 ppid=2562 pid=2598 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.362000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 23:52:55.362000 audit[2599]: NETFILTER_CFG table=filter:53 family=2 entries=1 op=nft_register_chain pid=2599 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:52:55.362000 audit[2599]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffd0d32550 a2=0 a3=0 items=0 ppid=2562 pid=2599 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/bin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.362000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4B5542454C45542D43414E415259002D740066696C746572 Jan 20 23:52:55.367682 systemd[1]: Created slice kubepods.slice - libcontainer container kubepods.slice. Jan 20 23:52:55.382420 systemd[1]: Created slice kubepods-burstable.slice - libcontainer container kubepods-burstable.slice. Jan 20 23:52:55.385719 systemd[1]: Created slice kubepods-besteffort.slice - libcontainer container kubepods-besteffort.slice. Jan 20 23:52:55.396680 kubelet[2562]: E0120 23:52:55.396635 2562 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 23:52:55.396848 kubelet[2562]: I0120 23:52:55.396827 2562 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:52:55.396930 kubelet[2562]: I0120 23:52:55.396839 2562 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:52:55.397636 kubelet[2562]: I0120 23:52:55.397348 2562 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:52:55.398298 kubelet[2562]: E0120 23:52:55.398274 2562 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." err="no imagefs label for configured runtime" Jan 20 23:52:55.398427 kubelet[2562]: E0120 23:52:55.398310 2562 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"ci-4547-0-0-n-e5b472a427\" not found" Jan 20 23:52:55.469979 systemd[1]: Created slice kubepods-burstable-pod647886ebc69bbf332a0c27d233bcf7d8.slice - libcontainer container kubepods-burstable-pod647886ebc69bbf332a0c27d233bcf7d8.slice. Jan 20 23:52:55.477323 kubelet[2562]: E0120 23:52:55.477259 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.479087 systemd[1]: Created slice kubepods-burstable-pod86ddab5f390526bf72acefd3fbf95fc0.slice - libcontainer container kubepods-burstable-pod86ddab5f390526bf72acefd3fbf95fc0.slice. Jan 20 23:52:55.488013 kubelet[2562]: E0120 23:52:55.487979 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.490563 systemd[1]: Created slice kubepods-burstable-podc12215e7a4d14fdc59c3b2255fa9fee0.slice - libcontainer container kubepods-burstable-podc12215e7a4d14fdc59c3b2255fa9fee0.slice. 
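The kubepods.slice, kubepods-burstable.slice and kubepods-besteffort.slice units created above are the kubelet's QoS-level cgroups (the systemd cgroup driver is confirmed later in this log), and the kubepods-burstable-pod<hash>.slice units are the per-pod cgroups for the three static control-plane pods. A minimal sketch of the standard QoS-to-slice mapping; this is general kubelet behaviour, not something stated verbatim in the log.

    # QoS class -> systemd slice, matching the slices created above.
    # Guaranteed pods get per-pod slices directly under kubepods.slice,
    # which is why no kubepods-guaranteed.slice appears in the log.
    QOS_SLICE = {
        "Guaranteed": "kubepods.slice",
        "Burstable":  "kubepods.slice/kubepods-burstable.slice",
        "BestEffort": "kubepods.slice/kubepods-besteffort.slice",
    }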
Jan 20 23:52:55.492451 kubelet[2562]: E0120 23:52:55.492426 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.499034 kubelet[2562]: I0120 23:52:55.499008 2562 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.499466 kubelet[2562]: E0120 23:52:55.499431 2562 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.209:6443/api/v1/nodes\": dial tcp 10.0.2.209:6443: connect: connection refused" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.535713 kubelet[2562]: E0120 23:52:55.535592 2562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-e5b472a427?timeout=10s\": dial tcp 10.0.2.209:6443: connect: connection refused" interval="400ms" Jan 20 23:52:55.630950 kubelet[2562]: I0120 23:52:55.630821 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.630950 kubelet[2562]: I0120 23:52:55.630863 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.630950 kubelet[2562]: I0120 23:52:55.630882 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.630950 kubelet[2562]: I0120 23:52:55.630905 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.630950 kubelet[2562]: I0120 23:52:55.630922 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.631360 kubelet[2562]: I0120 23:52:55.631004 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " 
pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.631360 kubelet[2562]: I0120 23:52:55.631063 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86ddab5f390526bf72acefd3fbf95fc0-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-e5b472a427\" (UID: \"86ddab5f390526bf72acefd3fbf95fc0\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.631360 kubelet[2562]: I0120 23:52:55.631083 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.631360 kubelet[2562]: I0120 23:52:55.631102 2562 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.701973 kubelet[2562]: I0120 23:52:55.701919 2562 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.702258 kubelet[2562]: E0120 23:52:55.702217 2562 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.209:6443/api/v1/nodes\": dial tcp 10.0.2.209:6443: connect: connection refused" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:55.778627 containerd[1670]: time="2026-01-20T23:52:55.778555201Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-e5b472a427,Uid:647886ebc69bbf332a0c27d233bcf7d8,Namespace:kube-system,Attempt:0,}" Jan 20 23:52:55.789357 containerd[1670]: time="2026-01-20T23:52:55.789153151Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-e5b472a427,Uid:86ddab5f390526bf72acefd3fbf95fc0,Namespace:kube-system,Attempt:0,}" Jan 20 23:52:55.793889 containerd[1670]: time="2026-01-20T23:52:55.793846244Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-e5b472a427,Uid:c12215e7a4d14fdc59c3b2255fa9fee0,Namespace:kube-system,Attempt:0,}" Jan 20 23:52:55.802586 containerd[1670]: time="2026-01-20T23:52:55.802540269Z" level=info msg="connecting to shim 06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a" address="unix:///run/containerd/s/cad77158e5571cd5da02ac3a46ad66df1e9ff9afaacb3604ec8322968ef95f59" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:52:55.830523 containerd[1670]: time="2026-01-20T23:52:55.830070468Z" level=info msg="connecting to shim 93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2" address="unix:///run/containerd/s/77750dce380f3365929df256a0617d011d006192a7e7b031fdd8484e3499a26e" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:52:55.833716 systemd[1]: Started cri-containerd-06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a.scope - libcontainer container 06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a. 
Jan 20 23:52:55.835787 containerd[1670]: time="2026-01-20T23:52:55.835740484Z" level=info msg="connecting to shim 24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312" address="unix:///run/containerd/s/fad34358c7a7bf819b036e98077d9ddea1069473dc342dbee546fb2f4b967850" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:52:55.848000 audit: BPF prog-id=83 op=LOAD Jan 20 23:52:55.848000 audit: BPF prog-id=84 op=LOAD Jan 20 23:52:55.848000 audit[2619]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.848000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.849000 audit: BPF prog-id=84 op=UNLOAD Jan 20 23:52:55.849000 audit[2619]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.849000 audit: BPF prog-id=85 op=LOAD Jan 20 23:52:55.849000 audit[2619]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001863e8 a2=98 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.849000 audit: BPF prog-id=86 op=LOAD Jan 20 23:52:55.849000 audit[2619]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000186168 a2=98 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.849000 audit: BPF prog-id=86 op=UNLOAD Jan 20 23:52:55.849000 audit[2619]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.849000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.849000 audit: BPF prog-id=85 op=UNLOAD Jan 20 23:52:55.849000 audit[2619]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.849000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.850000 audit: BPF prog-id=87 op=LOAD Jan 20 23:52:55.850000 audit[2619]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000186648 a2=98 a3=0 items=0 ppid=2608 pid=2619 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.850000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3036626331643965363863663766613264373538386130326136616464 Jan 20 23:52:55.861670 systemd[1]: Started cri-containerd-93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2.scope - libcontainer container 93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2. Jan 20 23:52:55.865645 systemd[1]: Started cri-containerd-24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312.scope - libcontainer container 24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312. 
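The audit BPF prog-id LOAD/UNLOAD records that bracket each "Started cri-containerd-..." line are emitted while runc sets up the new sandbox: on arch c00000b7 (AUDIT_ARCH_AARCH64) syscall 280 is bpf(2) and syscall 57 is close(2), consistent with runc loading and then releasing eBPF programs (most likely the cgroup v2 device filters) per container. Below is a small, illustrative parser for one of these SYSCALL records; the record text is copied from the log, the field-splitting regex is an assumption for the sketch.

    import re

    # One of the runc SYSCALL records from above, abbreviated.
    record = ('audit[2619]: SYSCALL arch=c00000b7 syscall=280 success=yes '
              'exit=21 a0=5 a1=4000186180 a2=98 a3=0 items=0 ppid=2608 '
              'pid=2619 comm="runc" exe="/usr/bin/runc"')
    # Split the key=value fields; quoted values keep their quotes.
    fields = dict(re.findall(r'(\w+)=("[^"]*"|\S+)', record))
    print(fields["arch"], fields["syscall"], fields["comm"])
    # -> c00000b7 280 "runc"   (280 == __NR_bpf on aarch64)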
Jan 20 23:52:55.875000 audit: BPF prog-id=88 op=LOAD Jan 20 23:52:55.876000 audit: BPF prog-id=89 op=LOAD Jan 20 23:52:55.876000 audit[2665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.876000 audit: BPF prog-id=89 op=UNLOAD Jan 20 23:52:55.876000 audit[2665]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.876000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.877000 audit: BPF prog-id=90 op=LOAD Jan 20 23:52:55.877000 audit[2665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.877000 audit: BPF prog-id=91 op=LOAD Jan 20 23:52:55.877000 audit[2665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.877000 audit: BPF prog-id=91 op=UNLOAD Jan 20 23:52:55.877000 audit[2665]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.877000 audit: BPF prog-id=90 op=UNLOAD Jan 20 23:52:55.877000 audit[2665]: SYSCALL 
arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.877000 audit: BPF prog-id=92 op=LOAD Jan 20 23:52:55.877000 audit[2665]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2643 pid=2665 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.877000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3933626239383937653061376235613865333831636432636231323831 Jan 20 23:52:55.880000 audit: BPF prog-id=93 op=LOAD Jan 20 23:52:55.882000 audit: BPF prog-id=94 op=LOAD Jan 20 23:52:55.882766 containerd[1670]: time="2026-01-20T23:52:55.882564138Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-controller-manager-ci-4547-0-0-n-e5b472a427,Uid:647886ebc69bbf332a0c27d233bcf7d8,Namespace:kube-system,Attempt:0,} returns sandbox id \"06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a\"" Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=94 op=UNLOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=95 op=LOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=96 op=LOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=96 op=UNLOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=95 op=UNLOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.882000 audit: BPF prog-id=97 op=LOAD Jan 20 23:52:55.882000 audit[2681]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2652 pid=2681 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.882000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3234643165653461363831313435633736656462383236336536633765 Jan 20 23:52:55.889475 containerd[1670]: time="2026-01-20T23:52:55.889400798Z" level=info msg="CreateContainer within sandbox \"06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:0,}" Jan 20 23:52:55.904511 containerd[1670]: time="2026-01-20T23:52:55.904081800Z" level=info msg="Container 5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:52:55.910972 containerd[1670]: time="2026-01-20T23:52:55.910859459Z" 
level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-scheduler-ci-4547-0-0-n-e5b472a427,Uid:86ddab5f390526bf72acefd3fbf95fc0,Namespace:kube-system,Attempt:0,} returns sandbox id \"93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2\"" Jan 20 23:52:55.916231 containerd[1670]: time="2026-01-20T23:52:55.916192954Z" level=info msg="CreateContainer within sandbox \"93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:0,}" Jan 20 23:52:55.920951 containerd[1670]: time="2026-01-20T23:52:55.920876688Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-apiserver-ci-4547-0-0-n-e5b472a427,Uid:c12215e7a4d14fdc59c3b2255fa9fee0,Namespace:kube-system,Attempt:0,} returns sandbox id \"24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312\"" Jan 20 23:52:55.922604 containerd[1670]: time="2026-01-20T23:52:55.922572132Z" level=info msg="CreateContainer within sandbox \"06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:0,} returns container id \"5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93\"" Jan 20 23:52:55.924213 containerd[1670]: time="2026-01-20T23:52:55.923371135Z" level=info msg="StartContainer for \"5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93\"" Jan 20 23:52:55.925416 containerd[1670]: time="2026-01-20T23:52:55.925392820Z" level=info msg="connecting to shim 5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93" address="unix:///run/containerd/s/cad77158e5571cd5da02ac3a46ad66df1e9ff9afaacb3604ec8322968ef95f59" protocol=ttrpc version=3 Jan 20 23:52:55.927216 containerd[1670]: time="2026-01-20T23:52:55.927188666Z" level=info msg="CreateContainer within sandbox \"24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312\" for container &ContainerMetadata{Name:kube-apiserver,Attempt:0,}" Jan 20 23:52:55.931737 containerd[1670]: time="2026-01-20T23:52:55.931700518Z" level=info msg="Container 02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:52:55.937001 kubelet[2562]: E0120 23:52:55.936963 2562 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://10.0.2.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-e5b472a427?timeout=10s\": dial tcp 10.0.2.209:6443: connect: connection refused" interval="800ms" Jan 20 23:52:55.938078 containerd[1670]: time="2026-01-20T23:52:55.938038417Z" level=info msg="Container a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:52:55.942467 containerd[1670]: time="2026-01-20T23:52:55.942420909Z" level=info msg="CreateContainer within sandbox \"93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:0,} returns container id \"02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37\"" Jan 20 23:52:55.942969 containerd[1670]: time="2026-01-20T23:52:55.942942431Z" level=info msg="StartContainer for \"02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37\"" Jan 20 23:52:55.944011 containerd[1670]: time="2026-01-20T23:52:55.943984714Z" level=info msg="connecting to shim 02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37" address="unix:///run/containerd/s/77750dce380f3365929df256a0617d011d006192a7e7b031fdd8484e3499a26e" protocol=ttrpc version=3 
Jan 20 23:52:55.945668 systemd[1]: Started cri-containerd-5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93.scope - libcontainer container 5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93. Jan 20 23:52:55.949438 containerd[1670]: time="2026-01-20T23:52:55.949328089Z" level=info msg="CreateContainer within sandbox \"24d1ee4a681145c76edb8263e6c7e7dd5e383398afa82f2e72d9bf9d7f394312\" for &ContainerMetadata{Name:kube-apiserver,Attempt:0,} returns container id \"a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15\"" Jan 20 23:52:55.949863 containerd[1670]: time="2026-01-20T23:52:55.949813410Z" level=info msg="StartContainer for \"a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15\"" Jan 20 23:52:55.953973 containerd[1670]: time="2026-01-20T23:52:55.953932422Z" level=info msg="connecting to shim a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15" address="unix:///run/containerd/s/fad34358c7a7bf819b036e98077d9ddea1069473dc342dbee546fb2f4b967850" protocol=ttrpc version=3 Jan 20 23:52:55.964697 systemd[1]: Started cri-containerd-02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37.scope - libcontainer container 02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37. Jan 20 23:52:55.967000 audit: BPF prog-id=98 op=LOAD Jan 20 23:52:55.968000 audit: BPF prog-id=99 op=LOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=99 op=UNLOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=100 op=LOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=101 op=LOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=101 op=UNLOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=100 op=UNLOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.968000 audit: BPF prog-id=102 op=LOAD Jan 20 23:52:55.968000 audit[2736]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2608 pid=2736 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.968000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3566393965313834333434656630306164366237383239363365636136 Jan 20 23:52:55.972117 systemd[1]: Started cri-containerd-a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15.scope - libcontainer container a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15. 
Jan 20 23:52:55.979000 audit: BPF prog-id=103 op=LOAD Jan 20 23:52:55.979000 audit: BPF prog-id=104 op=LOAD Jan 20 23:52:55.979000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138180 a2=98 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.979000 audit: BPF prog-id=104 op=UNLOAD Jan 20 23:52:55.979000 audit[2752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.979000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.980000 audit: BPF prog-id=105 op=LOAD Jan 20 23:52:55.980000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001383e8 a2=98 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.980000 audit: BPF prog-id=106 op=LOAD Jan 20 23:52:55.980000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000138168 a2=98 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.980000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.981000 audit: BPF prog-id=106 op=UNLOAD Jan 20 23:52:55.981000 audit[2752]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.981000 audit: BPF prog-id=105 op=UNLOAD Jan 20 23:52:55.981000 audit[2752]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.981000 audit: BPF prog-id=107 op=LOAD Jan 20 23:52:55.981000 audit[2752]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000138648 a2=98 a3=0 items=0 ppid=2643 pid=2752 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.981000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032656631366563633564656136653961386465643432366333666137 Jan 20 23:52:55.987000 audit: BPF prog-id=108 op=LOAD Jan 20 23:52:55.988000 audit: BPF prog-id=109 op=LOAD Jan 20 23:52:55.988000 audit[2771]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.988000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=109 op=UNLOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=110 op=LOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=111 op=LOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 
ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=111 op=UNLOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=110 op=UNLOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:55.989000 audit: BPF prog-id=112 op=LOAD Jan 20 23:52:55.989000 audit[2771]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2652 pid=2771 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:52:55.989000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6130626662333066343431656166316564393863363764636338613930 Jan 20 23:52:56.004322 containerd[1670]: time="2026-01-20T23:52:56.004111726Z" level=info msg="StartContainer for \"5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93\" returns successfully" Jan 20 23:52:56.016567 containerd[1670]: time="2026-01-20T23:52:56.015157677Z" level=info msg="StartContainer for \"02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37\" returns successfully" Jan 20 23:52:56.026644 containerd[1670]: time="2026-01-20T23:52:56.026228229Z" level=info msg="StartContainer for \"a0bfb30f441eaf1ed98c67dcc8a90d53f48e51baa49f461fd268b3241be20c15\" returns successfully" Jan 20 23:52:56.106918 kubelet[2562]: I0120 23:52:56.106802 2562 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:56.107218 kubelet[2562]: E0120 23:52:56.107181 2562 kubelet_node_status.go:107] "Unable to register node with API server" err="Post \"https://10.0.2.209:6443/api/v1/nodes\": dial tcp 10.0.2.209:6443: connect: connection refused" 
node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:56.369979 kubelet[2562]: E0120 23:52:56.369880 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:56.373624 kubelet[2562]: E0120 23:52:56.373189 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:56.377202 kubelet[2562]: E0120 23:52:56.377171 2562 kubelet.go:3305] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:56.910469 kubelet[2562]: I0120 23:52:56.909738 2562 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.086824 kubelet[2562]: E0120 23:52:57.086772 2562 nodelease.go:49] "Failed to get node when trying to set owner ref to the node lease" err="nodes \"ci-4547-0-0-n-e5b472a427\" not found" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.165659 kubelet[2562]: I0120 23:52:57.165502 2562 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.230990 kubelet[2562]: I0120 23:52:57.230923 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.236409 kubelet[2562]: E0120 23:52:57.236364 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.236409 kubelet[2562]: I0120 23:52:57.236399 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.238032 kubelet[2562]: E0120 23:52:57.237990 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-e5b472a427\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.238032 kubelet[2562]: I0120 23:52:57.238019 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.240989 kubelet[2562]: E0120 23:52:57.240942 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.319272 kubelet[2562]: I0120 23:52:57.319125 2562 apiserver.go:52] "Watching apiserver" Jan 20 23:52:57.330960 kubelet[2562]: I0120 23:52:57.330911 2562 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 23:52:57.379059 kubelet[2562]: I0120 23:52:57.379021 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.379401 kubelet[2562]: I0120 23:52:57.379374 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.388252 kubelet[2562]: E0120 23:52:57.388189 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" 
is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:57.390850 kubelet[2562]: E0120 23:52:57.390711 2562 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-e5b472a427\" is forbidden: no PriorityClass with name system-node-critical was found" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:58.380693 kubelet[2562]: I0120 23:52:58.380658 2562 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:52:59.547388 systemd[1]: Reload requested from client PID 2849 ('systemctl') (unit session-10.scope)... Jan 20 23:52:59.547406 systemd[1]: Reloading... Jan 20 23:52:59.614493 zram_generator::config[2895]: No configuration found. Jan 20 23:52:59.805547 systemd[1]: Reloading finished in 257 ms. Jan 20 23:52:59.833810 systemd[1]: Stopping kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:59.846142 systemd[1]: kubelet.service: Deactivated successfully. Jan 20 23:52:59.846000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:52:59.847290 systemd[1]: Stopped kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:52:59.847365 systemd[1]: kubelet.service: Consumed 1.133s CPU time, 128.9M memory peak. Jan 20 23:52:59.849579 systemd[1]: Starting kubelet.service - kubelet: The Kubernetes Node Agent... Jan 20 23:52:59.850000 audit: BPF prog-id=113 op=LOAD Jan 20 23:52:59.850000 audit: BPF prog-id=73 op=UNLOAD Jan 20 23:52:59.850000 audit: BPF prog-id=114 op=LOAD Jan 20 23:52:59.850000 audit: BPF prog-id=115 op=LOAD Jan 20 23:52:59.850000 audit: BPF prog-id=74 op=UNLOAD Jan 20 23:52:59.851000 audit: BPF prog-id=75 op=UNLOAD Jan 20 23:52:59.851000 audit: BPF prog-id=116 op=LOAD Jan 20 23:52:59.851000 audit: BPF prog-id=69 op=UNLOAD Jan 20 23:52:59.852000 audit: BPF prog-id=117 op=LOAD Jan 20 23:52:59.852000 audit: BPF prog-id=118 op=LOAD Jan 20 23:52:59.852000 audit: BPF prog-id=70 op=UNLOAD Jan 20 23:52:59.852000 audit: BPF prog-id=71 op=UNLOAD Jan 20 23:52:59.853000 audit: BPF prog-id=119 op=LOAD Jan 20 23:52:59.853000 audit: BPF prog-id=68 op=UNLOAD Jan 20 23:52:59.854000 audit: BPF prog-id=120 op=LOAD Jan 20 23:52:59.866000 audit: BPF prog-id=121 op=LOAD Jan 20 23:52:59.866000 audit: BPF prog-id=66 op=UNLOAD Jan 20 23:52:59.866000 audit: BPF prog-id=67 op=UNLOAD Jan 20 23:52:59.867000 audit: BPF prog-id=122 op=LOAD Jan 20 23:52:59.867000 audit: BPF prog-id=63 op=UNLOAD Jan 20 23:52:59.867000 audit: BPF prog-id=123 op=LOAD Jan 20 23:52:59.868000 audit: BPF prog-id=124 op=LOAD Jan 20 23:52:59.868000 audit: BPF prog-id=64 op=UNLOAD Jan 20 23:52:59.868000 audit: BPF prog-id=65 op=UNLOAD Jan 20 23:52:59.869000 audit: BPF prog-id=125 op=LOAD Jan 20 23:52:59.869000 audit: BPF prog-id=79 op=UNLOAD Jan 20 23:52:59.870000 audit: BPF prog-id=126 op=LOAD Jan 20 23:52:59.870000 audit: BPF prog-id=72 op=UNLOAD Jan 20 23:52:59.871000 audit: BPF prog-id=127 op=LOAD Jan 20 23:52:59.871000 audit: BPF prog-id=80 op=UNLOAD Jan 20 23:52:59.871000 audit: BPF prog-id=128 op=LOAD Jan 20 23:52:59.871000 audit: BPF prog-id=129 op=LOAD Jan 20 23:52:59.871000 audit: BPF prog-id=81 op=UNLOAD Jan 20 23:52:59.871000 audit: BPF prog-id=82 op=UNLOAD Jan 20 23:52:59.872000 audit: BPF prog-id=130 op=LOAD Jan 20 23:52:59.872000 audit: 
BPF prog-id=76 op=UNLOAD Jan 20 23:52:59.873000 audit: BPF prog-id=131 op=LOAD Jan 20 23:52:59.873000 audit: BPF prog-id=132 op=LOAD Jan 20 23:52:59.873000 audit: BPF prog-id=77 op=UNLOAD Jan 20 23:52:59.873000 audit: BPF prog-id=78 op=UNLOAD Jan 20 23:53:00.006451 systemd[1]: Started kubelet.service - kubelet: The Kubernetes Node Agent. Jan 20 23:53:00.006000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=kubelet comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:00.019846 (kubelet)[2940]: kubelet.service: Referenced but unset environment variable evaluates to an empty string: KUBELET_EXTRA_ARGS Jan 20 23:53:00.069430 kubelet[2940]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:53:00.069430 kubelet[2940]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI. Jan 20 23:53:00.069430 kubelet[2940]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information. Jan 20 23:53:00.069762 kubelet[2940]: I0120 23:53:00.069434 2940 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime" Jan 20 23:53:00.074956 kubelet[2940]: I0120 23:53:00.074908 2940 server.go:530] "Kubelet version" kubeletVersion="v1.33.0" Jan 20 23:53:00.074956 kubelet[2940]: I0120 23:53:00.074945 2940 server.go:532] "Golang settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 20 23:53:00.075140 kubelet[2940]: I0120 23:53:00.075124 2940 server.go:956] "Client rotation is on, will bootstrap in background" Jan 20 23:53:00.076411 kubelet[2940]: I0120 23:53:00.076384 2940 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 20 23:53:00.079787 kubelet[2940]: I0120 23:53:00.079754 2940 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/pki/ca.crt" Jan 20 23:53:00.083429 kubelet[2940]: I0120 23:53:00.083406 2940 server.go:1446] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 20 23:53:00.085993 kubelet[2940]: I0120 23:53:00.085961 2940 server.go:782] "--cgroups-per-qos enabled, but --cgroup-root was not specified. 
defaulting to /" Jan 20 23:53:00.086216 kubelet[2940]: I0120 23:53:00.086191 2940 container_manager_linux.go:267] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 20 23:53:00.086376 kubelet[2940]: I0120 23:53:00.086217 2940 container_manager_linux.go:272] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"ci-4547-0-0-n-e5b472a427","RuntimeCgroupsName":"","SystemCgroupsName":"","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":false,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":null,"HardEvictionThresholds":[{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":-1,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 20 23:53:00.086465 kubelet[2940]: I0120 23:53:00.086386 2940 topology_manager.go:138] "Creating topology manager with none policy" Jan 20 23:53:00.086465 kubelet[2940]: I0120 23:53:00.086393 2940 container_manager_linux.go:303] "Creating device plugin manager" Jan 20 23:53:00.086465 kubelet[2940]: I0120 23:53:00.086434 2940 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:53:00.086673 kubelet[2940]: I0120 23:53:00.086657 2940 kubelet.go:480] "Attempting to sync node with API server" Jan 20 23:53:00.086708 kubelet[2940]: I0120 23:53:00.086674 2940 kubelet.go:375] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 20 23:53:00.086708 kubelet[2940]: I0120 23:53:00.086698 2940 kubelet.go:386] "Adding apiserver pod source" Jan 20 23:53:00.087408 kubelet[2940]: I0120 23:53:00.086711 2940 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 20 23:53:00.087906 kubelet[2940]: I0120 23:53:00.087877 2940 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="containerd" version="v2.1.5" apiVersion="v1" Jan 20 23:53:00.089188 kubelet[2940]: I0120 23:53:00.089155 2940 kubelet.go:935] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 20 23:53:00.098737 kubelet[2940]: I0120 23:53:00.098698 2940 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 20 23:53:00.098867 kubelet[2940]: I0120 23:53:00.098750 2940 server.go:1289] "Started kubelet" Jan 20 23:53:00.101763 kubelet[2940]: I0120 23:53:00.101685 2940 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 20 
23:53:00.102036 kubelet[2940]: I0120 23:53:00.102015 2940 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 20 23:53:00.102113 kubelet[2940]: I0120 23:53:00.102089 2940 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 20 23:53:00.102189 kubelet[2940]: I0120 23:53:00.102175 2940 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 20 23:53:00.103189 kubelet[2940]: I0120 23:53:00.103110 2940 server.go:317] "Adding debug handlers to kubelet server" Jan 20 23:53:00.109043 kubelet[2940]: I0120 23:53:00.108979 2940 dynamic_serving_content.go:135] "Starting controller" name="kubelet-server-cert-files::/var/lib/kubelet/pki/kubelet.crt::/var/lib/kubelet/pki/kubelet.key" Jan 20 23:53:00.111894 kubelet[2940]: I0120 23:53:00.111856 2940 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 20 23:53:00.112021 kubelet[2940]: I0120 23:53:00.111984 2940 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 20 23:53:00.112112 kubelet[2940]: I0120 23:53:00.112092 2940 reconciler.go:26] "Reconciler: start to sync state" Jan 20 23:53:00.115154 kubelet[2940]: E0120 23:53:00.115094 2940 kubelet.go:1600] "Image garbage collection failed once. Stats initialization may not have completed yet" err="invalid capacity 0 on image filesystem" Jan 20 23:53:00.116085 kubelet[2940]: I0120 23:53:00.116034 2940 factory.go:223] Registration of the systemd container factory successfully Jan 20 23:53:00.116211 kubelet[2940]: I0120 23:53:00.116156 2940 factory.go:221] Registration of the crio container factory failed: Get "http://%2Fvar%2Frun%2Fcrio%2Fcrio.sock/info": dial unix /var/run/crio/crio.sock: connect: no such file or directory Jan 20 23:53:00.119528 kubelet[2940]: I0120 23:53:00.119480 2940 factory.go:223] Registration of the containerd container factory successfully Jan 20 23:53:00.122972 kubelet[2940]: I0120 23:53:00.122794 2940 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 20 23:53:00.123865 kubelet[2940]: I0120 23:53:00.123844 2940 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 20 23:53:00.123981 kubelet[2940]: I0120 23:53:00.123970 2940 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 20 23:53:00.124058 kubelet[2940]: I0120 23:53:00.124046 2940 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
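The mirror-pod failures at the start of this section ("no PriorityClass with name system-node-critical was found") are the kubelet trying to mirror static control-plane pods whose manifests reference the built-in system-node-critical class before the API server has finished creating it; later in this section the same mirror-pod creation fails only with "already exists", so the class had appeared in the meantime. Below is a minimal client-go sketch of the same lookup, useful for confirming the built-in classes are present once the control plane is up. The kubeconfig path is an assumption for illustration, not something taken from this log.

```go
// Sketch: check that the built-in priority classes referenced by the static
// control-plane pods exist, via client-go. The kubeconfig path below is a
// hypothetical admin kubeconfig, not read from this log.
package main

import (
	"context"
	"fmt"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", "/etc/kubernetes/admin.conf")
	if err != nil {
		panic(err)
	}
	cs, err := kubernetes.NewForConfig(cfg)
	if err != nil {
		panic(err)
	}
	for _, name := range []string{"system-node-critical", "system-cluster-critical"} {
		pc, err := cs.SchedulingV1().PriorityClasses().Get(context.Background(), name, metav1.GetOptions{})
		if err != nil {
			// While the class is missing, this mirrors the kubelet's admission error above.
			fmt.Printf("%s: %v\n", name, err)
			continue
		}
		fmt.Printf("%s: value=%d globalDefault=%v\n", name, pc.Value, pc.GlobalDefault)
	}
}
```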
Jan 20 23:53:00.124107 kubelet[2940]: I0120 23:53:00.124099 2940 kubelet.go:2436] "Starting kubelet main sync loop" Jan 20 23:53:00.124201 kubelet[2940]: E0120 23:53:00.124184 2940 kubelet.go:2460] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 20 23:53:00.165175 kubelet[2940]: I0120 23:53:00.165145 2940 cpu_manager.go:221] "Starting CPU manager" policy="none" Jan 20 23:53:00.165175 kubelet[2940]: I0120 23:53:00.165166 2940 cpu_manager.go:222] "Reconciling" reconcilePeriod="10s" Jan 20 23:53:00.165175 kubelet[2940]: I0120 23:53:00.165188 2940 state_mem.go:36] "Initialized new in-memory state store" Jan 20 23:53:00.165412 kubelet[2940]: I0120 23:53:00.165323 2940 state_mem.go:88] "Updated default CPUSet" cpuSet="" Jan 20 23:53:00.165412 kubelet[2940]: I0120 23:53:00.165332 2940 state_mem.go:96] "Updated CPUSet assignments" assignments={} Jan 20 23:53:00.165412 kubelet[2940]: I0120 23:53:00.165349 2940 policy_none.go:49] "None policy: Start" Jan 20 23:53:00.165412 kubelet[2940]: I0120 23:53:00.165358 2940 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 20 23:53:00.165412 kubelet[2940]: I0120 23:53:00.165366 2940 state_mem.go:35] "Initializing new in-memory state store" Jan 20 23:53:00.165571 kubelet[2940]: I0120 23:53:00.165474 2940 state_mem.go:75] "Updated machine memory state" Jan 20 23:53:00.169544 kubelet[2940]: E0120 23:53:00.169511 2940 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 20 23:53:00.169879 kubelet[2940]: I0120 23:53:00.169844 2940 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 20 23:53:00.169970 kubelet[2940]: I0120 23:53:00.169861 2940 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 20 23:53:00.170167 kubelet[2940]: I0120 23:53:00.170145 2940 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 20 23:53:00.171862 kubelet[2940]: E0120 23:53:00.171776 2940 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="no imagefs label for configured runtime" Jan 20 23:53:00.225740 kubelet[2940]: I0120 23:53:00.225665 2940 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.225740 kubelet[2940]: I0120 23:53:00.225735 2940 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.225913 kubelet[2940]: I0120 23:53:00.225665 2940 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.234051 kubelet[2940]: E0120 23:53:00.234010 2940 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-scheduler-ci-4547-0-0-n-e5b472a427\" already exists" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.276717 kubelet[2940]: I0120 23:53:00.276687 2940 kubelet_node_status.go:75] "Attempting to register node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.289620 kubelet[2940]: I0120 23:53:00.289548 2940 kubelet_node_status.go:124] "Node was previously registered" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.289753 kubelet[2940]: I0120 23:53:00.289647 2940 kubelet_node_status.go:78] "Successfully registered node" node="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313020 kubelet[2940]: I0120 23:53:00.312947 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/86ddab5f390526bf72acefd3fbf95fc0-kubeconfig\") pod \"kube-scheduler-ci-4547-0-0-n-e5b472a427\" (UID: \"86ddab5f390526bf72acefd3fbf95fc0\") " pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313020 kubelet[2940]: I0120 23:53:00.312993 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-ca-certs\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313210 kubelet[2940]: I0120 23:53:00.313065 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-k8s-certs\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313210 kubelet[2940]: I0120 23:53:00.313101 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/c12215e7a4d14fdc59c3b2255fa9fee0-usr-share-ca-certificates\") pod \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" (UID: \"c12215e7a4d14fdc59c3b2255fa9fee0\") " pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313210 kubelet[2940]: I0120 23:53:00.313133 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"k8s-certs\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-k8s-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313210 kubelet[2940]: I0120 23:53:00.313152 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started 
for volume \"kubeconfig\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-kubeconfig\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313210 kubelet[2940]: I0120 23:53:00.313168 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-share-ca-certificates\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-usr-share-ca-certificates\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313314 kubelet[2940]: I0120 23:53:00.313192 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-certs\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-ca-certs\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:00.313314 kubelet[2940]: I0120 23:53:00.313217 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvolume-dir\" (UniqueName: \"kubernetes.io/host-path/647886ebc69bbf332a0c27d233bcf7d8-flexvolume-dir\") pod \"kube-controller-manager-ci-4547-0-0-n-e5b472a427\" (UID: \"647886ebc69bbf332a0c27d233bcf7d8\") " pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:01.088512 kubelet[2940]: I0120 23:53:01.087795 2940 apiserver.go:52] "Watching apiserver" Jan 20 23:53:01.112855 kubelet[2940]: I0120 23:53:01.112799 2940 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 20 23:53:01.147510 kubelet[2940]: I0120 23:53:01.147264 2940 kubelet.go:3309] "Creating a mirror pod for static pod" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:01.152536 kubelet[2940]: E0120 23:53:01.152497 2940 kubelet.go:3311] "Failed creating a mirror pod" err="pods \"kube-apiserver-ci-4547-0-0-n-e5b472a427\" already exists" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" Jan 20 23:53:01.174140 kubelet[2940]: I0120 23:53:01.174073 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-apiserver-ci-4547-0-0-n-e5b472a427" podStartSLOduration=1.174054387 podStartE2EDuration="1.174054387s" podCreationTimestamp="2026-01-20 23:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:01.166040324 +0000 UTC m=+1.140159379" watchObservedRunningTime="2026-01-20 23:53:01.174054387 +0000 UTC m=+1.148173442" Jan 20 23:53:01.174315 kubelet[2940]: I0120 23:53:01.174228 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-scheduler-ci-4547-0-0-n-e5b472a427" podStartSLOduration=3.174223028 podStartE2EDuration="3.174223028s" podCreationTimestamp="2026-01-20 23:52:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:01.173483865 +0000 UTC m=+1.147602920" watchObservedRunningTime="2026-01-20 23:53:01.174223028 +0000 UTC m=+1.148342043" Jan 20 23:53:01.196500 kubelet[2940]: I0120 23:53:01.196376 2940 pod_startup_latency_tracker.go:104] "Observed pod 
startup duration" pod="kube-system/kube-controller-manager-ci-4547-0-0-n-e5b472a427" podStartSLOduration=1.196359971 podStartE2EDuration="1.196359971s" podCreationTimestamp="2026-01-20 23:53:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:01.186242582 +0000 UTC m=+1.160361637" watchObservedRunningTime="2026-01-20 23:53:01.196359971 +0000 UTC m=+1.170479026" Jan 20 23:53:04.722208 kubelet[2940]: I0120 23:53:04.722174 2940 kuberuntime_manager.go:1746] "Updating runtime config through cri with podcidr" CIDR="192.168.0.0/24" Jan 20 23:53:04.722978 containerd[1670]: time="2026-01-20T23:53:04.722944495Z" level=info msg="No cni config template is specified, wait for other system components to drop the config." Jan 20 23:53:04.723191 kubelet[2940]: I0120 23:53:04.723132 2940 kubelet_network.go:61] "Updating Pod CIDR" originalPodCIDR="" newPodCIDR="192.168.0.0/24" Jan 20 23:53:05.744608 systemd[1]: Created slice kubepods-besteffort-podf9d9c6b3_e74d_4c3c_880c_2b65f7bea653.slice - libcontainer container kubepods-besteffort-podf9d9c6b3_e74d_4c3c_880c_2b65f7bea653.slice. Jan 20 23:53:05.749582 kubelet[2940]: I0120 23:53:05.749541 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-proxy\" (UniqueName: \"kubernetes.io/configmap/f9d9c6b3-e74d-4c3c-880c-2b65f7bea653-kube-proxy\") pod \"kube-proxy-k8xv5\" (UID: \"f9d9c6b3-e74d-4c3c-880c-2b65f7bea653\") " pod="kube-system/kube-proxy-k8xv5" Jan 20 23:53:05.749582 kubelet[2940]: I0120 23:53:05.749587 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/f9d9c6b3-e74d-4c3c-880c-2b65f7bea653-xtables-lock\") pod \"kube-proxy-k8xv5\" (UID: \"f9d9c6b3-e74d-4c3c-880c-2b65f7bea653\") " pod="kube-system/kube-proxy-k8xv5" Jan 20 23:53:05.749920 kubelet[2940]: I0120 23:53:05.749605 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/f9d9c6b3-e74d-4c3c-880c-2b65f7bea653-lib-modules\") pod \"kube-proxy-k8xv5\" (UID: \"f9d9c6b3-e74d-4c3c-880c-2b65f7bea653\") " pod="kube-system/kube-proxy-k8xv5" Jan 20 23:53:05.749920 kubelet[2940]: I0120 23:53:05.749648 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wvnpk\" (UniqueName: \"kubernetes.io/projected/f9d9c6b3-e74d-4c3c-880c-2b65f7bea653-kube-api-access-wvnpk\") pod \"kube-proxy-k8xv5\" (UID: \"f9d9c6b3-e74d-4c3c-880c-2b65f7bea653\") " pod="kube-system/kube-proxy-k8xv5" Jan 20 23:53:05.814208 systemd[1]: Created slice kubepods-besteffort-pod3f4337ae_9506_4cd1_8704_137b82d51f09.slice - libcontainer container kubepods-besteffort-pod3f4337ae_9506_4cd1_8704_137b82d51f09.slice. 
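The kubepods-besteffort-pod….slice units created above follow the kubelet's systemd cgroup naming (cgroupDriver="systemd" earlier in this section): the QoS class plus the pod UID with its dashes escaped to underscores, because "-" is systemd's slice hierarchy separator. A small sketch of that mapping, using the kube-proxy-k8xv5 UID from these entries; the tigera-operator pod's slice (pod3f4337ae_9506_…) follows the same pattern.

```go
// Sketch: reproduce the systemd slice name the kubelet logs for a pod, e.g.
// kubepods-besteffort-podf9d9c6b3_e74d_4c3c_880c_2b65f7bea653.slice.
package main

import (
	"fmt"
	"strings"
)

// sliceName maps a QoS class ("besteffort", "burstable", or "" for guaranteed,
// which omits the QoS segment) and a pod UID to the slice name seen in the journal.
func sliceName(qos, uid string) string {
	// "-" separates slice hierarchy levels in systemd, so the UID's dashes
	// are escaped to underscores.
	escaped := strings.ReplaceAll(uid, "-", "_")
	if qos == "" {
		return fmt.Sprintf("kubepods-pod%s.slice", escaped)
	}
	return fmt.Sprintf("kubepods-%s-pod%s.slice", qos, escaped)
}

func main() {
	// UID taken from the kube-proxy-k8xv5 entries above.
	fmt.Println(sliceName("besteffort", "f9d9c6b3-e74d-4c3c-880c-2b65f7bea653"))
}
```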
Jan 20 23:53:05.850861 kubelet[2940]: I0120 23:53:05.850655 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: \"kubernetes.io/host-path/3f4337ae-9506-4cd1-8704-137b82d51f09-var-lib-calico\") pod \"tigera-operator-7dcd859c48-fdt5g\" (UID: \"3f4337ae-9506-4cd1-8704-137b82d51f09\") " pod="tigera-operator/tigera-operator-7dcd859c48-fdt5g" Jan 20 23:53:05.850861 kubelet[2940]: I0120 23:53:05.850701 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sf25\" (UniqueName: \"kubernetes.io/projected/3f4337ae-9506-4cd1-8704-137b82d51f09-kube-api-access-5sf25\") pod \"tigera-operator-7dcd859c48-fdt5g\" (UID: \"3f4337ae-9506-4cd1-8704-137b82d51f09\") " pod="tigera-operator/tigera-operator-7dcd859c48-fdt5g" Jan 20 23:53:06.062620 containerd[1670]: time="2026-01-20T23:53:06.062260727Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8xv5,Uid:f9d9c6b3-e74d-4c3c-880c-2b65f7bea653,Namespace:kube-system,Attempt:0,}" Jan 20 23:53:06.082002 containerd[1670]: time="2026-01-20T23:53:06.081957663Z" level=info msg="connecting to shim c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513" address="unix:///run/containerd/s/8db184a84a2168220f5c763c48d3d202807bc71191daf2b8e8f1adce6da6b68f" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:06.109015 systemd[1]: Started cri-containerd-c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513.scope - libcontainer container c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513. Jan 20 23:53:06.117000 audit: BPF prog-id=133 op=LOAD Jan 20 23:53:06.118594 containerd[1670]: time="2026-01-20T23:53:06.118555328Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fdt5g,Uid:3f4337ae-9506-4cd1-8704-137b82d51f09,Namespace:tigera-operator,Attempt:0,}" Jan 20 23:53:06.118800 kernel: kauditd_printk_skb: 200 callbacks suppressed Jan 20 23:53:06.118854 kernel: audit: type=1334 audit(1768953186.117:438): prog-id=133 op=LOAD Jan 20 23:53:06.119000 audit: BPF prog-id=134 op=LOAD Jan 20 23:53:06.121126 kernel: audit: type=1334 audit(1768953186.119:439): prog-id=134 op=LOAD Jan 20 23:53:06.121195 kernel: audit: type=1300 audit(1768953186.119:439): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.119000 audit[3018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.128472 kernel: audit: type=1327 audit(1768953186.119:439): 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.119000 audit: BPF prog-id=134 op=UNLOAD Jan 20 23:53:06.129752 kernel: audit: type=1334 audit(1768953186.119:440): prog-id=134 op=UNLOAD Jan 20 23:53:06.129841 kernel: audit: type=1300 audit(1768953186.119:440): arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.119000 audit[3018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.137366 kernel: audit: type=1327 audit(1768953186.119:440): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.119000 audit: BPF prog-id=135 op=LOAD Jan 20 23:53:06.138908 kernel: audit: type=1334 audit(1768953186.119:441): prog-id=135 op=LOAD Jan 20 23:53:06.138972 kernel: audit: type=1300 audit(1768953186.119:441): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.119000 audit[3018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.143860 containerd[1670]: time="2026-01-20T23:53:06.143740760Z" level=info msg="connecting to shim 10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f" address="unix:///run/containerd/s/95ef60629d42e62855d3d23a825537ea6905bede6ea3b7f9e5c34c24d3a72036" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:06.119000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.147446 kernel: audit: type=1327 audit(1768953186.119:441): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.120000 audit: BPF 
prog-id=136 op=LOAD Jan 20 23:53:06.120000 audit[3018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.120000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.124000 audit: BPF prog-id=136 op=UNLOAD Jan 20 23:53:06.124000 audit[3018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.124000 audit: BPF prog-id=135 op=UNLOAD Jan 20 23:53:06.124000 audit[3018]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.124000 audit: BPF prog-id=137 op=LOAD Jan 20 23:53:06.124000 audit[3018]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3007 pid=3018 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.124000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6337666335663761353930613634306434666461653832643731396266 Jan 20 23:53:06.160470 containerd[1670]: time="2026-01-20T23:53:06.160390768Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:kube-proxy-k8xv5,Uid:f9d9c6b3-e74d-4c3c-880c-2b65f7bea653,Namespace:kube-system,Attempt:0,} returns sandbox id \"c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513\"" Jan 20 23:53:06.167807 systemd[1]: Started cri-containerd-10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f.scope - libcontainer container 10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f. 
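The RunPodSandbox and "connecting to shim" entries are the kubelet driving containerd over the CRI gRPC API. The sketch below queries the same RuntimeService directly, printing the runtime version (compare containerRuntime="containerd" version="v2.1.5" earlier) and the sandboxes it knows about. The socket path is the conventional containerd CRI endpoint and is an assumption here, not read from these entries.

```go
// Sketch: query the CRI RuntimeService behind the RunPodSandbox entries above.
// The endpoint is the conventional containerd CRI socket (an assumption).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Second)
	defer cancel()

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	rt := runtimeapi.NewRuntimeServiceClient(conn)

	ver, err := rt.Version(ctx, &runtimeapi.VersionRequest{})
	if err != nil {
		panic(err)
	}
	fmt.Printf("runtime: %s %s\n", ver.RuntimeName, ver.RuntimeVersion)

	pods, err := rt.ListPodSandbox(ctx, &runtimeapi.ListPodSandboxRequest{})
	if err != nil {
		panic(err)
	}
	for _, s := range pods.Items {
		fmt.Printf("%s  %s/%s  %v\n", s.Id, s.Metadata.Namespace, s.Metadata.Name, s.State)
	}
}
```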
Jan 20 23:53:06.168753 containerd[1670]: time="2026-01-20T23:53:06.168708712Z" level=info msg="CreateContainer within sandbox \"c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513\" for container &ContainerMetadata{Name:kube-proxy,Attempt:0,}" Jan 20 23:53:06.179000 audit: BPF prog-id=138 op=LOAD Jan 20 23:53:06.179000 audit: BPF prog-id=139 op=LOAD Jan 20 23:53:06.179000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.179000 audit: BPF prog-id=139 op=UNLOAD Jan 20 23:53:06.179000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.179000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.180000 audit: BPF prog-id=140 op=LOAD Jan 20 23:53:06.180000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.180000 audit: BPF prog-id=141 op=LOAD Jan 20 23:53:06.180000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.180000 audit: BPF prog-id=141 op=UNLOAD Jan 20 23:53:06.180000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.180000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.180000 audit: BPF prog-id=140 op=UNLOAD Jan 20 23:53:06.180000 audit[3056]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.180000 audit: BPF prog-id=142 op=LOAD Jan 20 23:53:06.180000 audit[3056]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3046 pid=3056 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.180000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3130623466383531336233336539633734653362613330376161393363 Jan 20 23:53:06.182849 containerd[1670]: time="2026-01-20T23:53:06.182807432Z" level=info msg="Container def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:06.192028 containerd[1670]: time="2026-01-20T23:53:06.191979138Z" level=info msg="CreateContainer within sandbox \"c7fc5f7a590a640d4fdae82d719bf3114e8d2a6e5eaa2f4e1152432d5cc64513\" for &ContainerMetadata{Name:kube-proxy,Attempt:0,} returns container id \"def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2\"" Jan 20 23:53:06.192916 containerd[1670]: time="2026-01-20T23:53:06.192728940Z" level=info msg="StartContainer for \"def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2\"" Jan 20 23:53:06.194617 containerd[1670]: time="2026-01-20T23:53:06.194566586Z" level=info msg="connecting to shim def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2" address="unix:///run/containerd/s/8db184a84a2168220f5c763c48d3d202807bc71191daf2b8e8f1adce6da6b68f" protocol=ttrpc version=3 Jan 20 23:53:06.206431 containerd[1670]: time="2026-01-20T23:53:06.206386019Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:tigera-operator-7dcd859c48-fdt5g,Uid:3f4337ae-9506-4cd1-8704-137b82d51f09,Namespace:tigera-operator,Attempt:0,} returns sandbox id \"10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f\"" Jan 20 23:53:06.208295 containerd[1670]: time="2026-01-20T23:53:06.208243625Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\"" Jan 20 23:53:06.219825 systemd[1]: Started cri-containerd-def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2.scope - libcontainer container def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2. 
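The PullImage "quay.io/tigera/operator:v1.38.7" entry above is the corresponding call on the CRI ImageService before the tigera-operator container is created. A matching sketch, under the same socket-path assumption as the previous example:

```go
// Sketch: issue the same kind of CRI PullImage call that produced the
// PullImage "quay.io/tigera/operator:v1.38.7" entry above. The socket path
// is the conventional containerd CRI endpoint (an assumption, as before).
package main

import (
	"context"
	"fmt"
	"time"

	"google.golang.org/grpc"
	"google.golang.org/grpc/credentials/insecure"
	runtimeapi "k8s.io/cri-api/pkg/apis/runtime/v1"
)

func main() {
	ctx, cancel := context.WithTimeout(context.Background(), 5*time.Minute)
	defer cancel()

	conn, err := grpc.Dial("unix:///run/containerd/containerd.sock",
		grpc.WithTransportCredentials(insecure.NewCredentials()))
	if err != nil {
		panic(err)
	}
	defer conn.Close()

	img := runtimeapi.NewImageServiceClient(conn)
	resp, err := img.PullImage(ctx, &runtimeapi.PullImageRequest{
		Image: &runtimeapi.ImageSpec{Image: "quay.io/tigera/operator:v1.38.7"},
	})
	if err != nil {
		panic(err)
	}
	// Digest-qualified reference once the pull completes.
	fmt.Println("image ref:", resp.ImageRef)
}
```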
Jan 20 23:53:06.279000 audit: BPF prog-id=143 op=LOAD Jan 20 23:53:06.279000 audit[3085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3007 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465663364313634396535356364653739656231336233666237306437 Jan 20 23:53:06.279000 audit: BPF prog-id=144 op=LOAD Jan 20 23:53:06.279000 audit[3085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3007 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465663364313634396535356364653739656231336233666237306437 Jan 20 23:53:06.279000 audit: BPF prog-id=144 op=UNLOAD Jan 20 23:53:06.279000 audit[3085]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465663364313634396535356364653739656231336233666237306437 Jan 20 23:53:06.279000 audit: BPF prog-id=143 op=UNLOAD Jan 20 23:53:06.279000 audit[3085]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3007 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.279000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465663364313634396535356364653739656231336233666237306437 Jan 20 23:53:06.280000 audit: BPF prog-id=145 op=LOAD Jan 20 23:53:06.280000 audit[3085]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3007 pid=3085 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.280000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6465663364313634396535356364653739656231336233666237306437 Jan 20 23:53:06.299441 containerd[1670]: time="2026-01-20T23:53:06.299308205Z" level=info msg="StartContainer for 
\"def3d1649e55cde79eb13b3fb70d70464b00d22abf4dc57bba55092fc246a4e2\" returns successfully" Jan 20 23:53:06.463000 audit[3157]: NETFILTER_CFG table=mangle:54 family=10 entries=1 op=nft_register_chain pid=3157 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.463000 audit[3157]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffe69bf990 a2=0 a3=1 items=0 ppid=3104 pid=3157 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.463000 audit[3156]: NETFILTER_CFG table=mangle:55 family=2 entries=1 op=nft_register_chain pid=3156 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.463000 audit[3156]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffec936da0 a2=0 a3=1 items=0 ppid=3104 pid=3156 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.463000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 23:53:06.463000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006D616E676C65 Jan 20 23:53:06.466000 audit[3159]: NETFILTER_CFG table=nat:56 family=2 entries=1 op=nft_register_chain pid=3159 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.466000 audit[3159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffffc148b70 a2=0 a3=1 items=0 ppid=3104 pid=3159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.466000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 23:53:06.467000 audit[3163]: NETFILTER_CFG table=filter:57 family=2 entries=1 op=nft_register_chain pid=3163 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.467000 audit[3163]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdc33f0f0 a2=0 a3=1 items=0 ppid=3104 pid=3163 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.467000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 23:53:06.468000 audit[3162]: NETFILTER_CFG table=nat:58 family=10 entries=1 op=nft_register_chain pid=3162 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.468000 audit[3162]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd448f5e0 a2=0 a3=1 items=0 ppid=3104 pid=3162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.468000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D74006E6174 Jan 20 23:53:06.471000 audit[3164]: NETFILTER_CFG table=filter:59 family=10 entries=1 op=nft_register_chain pid=3164 
subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.471000 audit[3164]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffff7f9730 a2=0 a3=1 items=0 ppid=3104 pid=3164 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.471000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D43414E415259002D740066696C746572 Jan 20 23:53:06.564000 audit[3165]: NETFILTER_CFG table=filter:60 family=2 entries=1 op=nft_register_chain pid=3165 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.564000 audit[3165]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffd74bdd00 a2=0 a3=1 items=0 ppid=3104 pid=3165 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.564000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 23:53:06.567000 audit[3167]: NETFILTER_CFG table=filter:61 family=2 entries=1 op=nft_register_rule pid=3167 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.567000 audit[3167]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffd00248a0 a2=0 a3=1 items=0 ppid=3104 pid=3167 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.567000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276696365 Jan 20 23:53:06.571000 audit[3170]: NETFILTER_CFG table=filter:62 family=2 entries=1 op=nft_register_rule pid=3170 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.571000 audit[3170]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=ffffc6156bb0 a2=0 a3=1 items=0 ppid=3104 pid=3170 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.571000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C65207365727669 Jan 20 23:53:06.572000 audit[3171]: NETFILTER_CFG table=filter:63 family=2 entries=1 op=nft_register_chain pid=3171 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.572000 audit[3171]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff5c94370 a2=0 a3=1 items=0 ppid=3104 pid=3171 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.572000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 23:53:06.575000 
audit[3173]: NETFILTER_CFG table=filter:64 family=2 entries=1 op=nft_register_rule pid=3173 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.575000 audit[3173]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffefb15570 a2=0 a3=1 items=0 ppid=3104 pid=3173 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.575000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 23:53:06.576000 audit[3174]: NETFILTER_CFG table=filter:65 family=2 entries=1 op=nft_register_chain pid=3174 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.576000 audit[3174]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffcf1ac220 a2=0 a3=1 items=0 ppid=3104 pid=3174 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.576000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 23:53:06.579000 audit[3176]: NETFILTER_CFG table=filter:66 family=2 entries=1 op=nft_register_rule pid=3176 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.579000 audit[3176]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffe9fd59a0 a2=0 a3=1 items=0 ppid=3104 pid=3176 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.579000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 23:53:06.583000 audit[3179]: NETFILTER_CFG table=filter:67 family=2 entries=1 op=nft_register_rule pid=3179 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.583000 audit[3179]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffef3c5ee0 a2=0 a3=1 items=0 ppid=3104 pid=3179 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.583000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D53 Jan 20 23:53:06.584000 audit[3180]: NETFILTER_CFG table=filter:68 family=2 entries=1 op=nft_register_chain pid=3180 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.584000 audit[3180]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc6d06110 a2=0 a3=1 items=0 ppid=3104 pid=3180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.584000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 23:53:06.587000 audit[3182]: NETFILTER_CFG table=filter:69 family=2 entries=1 op=nft_register_rule pid=3182 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.587000 audit[3182]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffff1b50fd0 a2=0 a3=1 items=0 ppid=3104 pid=3182 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.587000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 23:53:06.588000 audit[3183]: NETFILTER_CFG table=filter:70 family=2 entries=1 op=nft_register_chain pid=3183 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.588000 audit[3183]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=fffff394f840 a2=0 a3=1 items=0 ppid=3104 pid=3183 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.588000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 23:53:06.591000 audit[3185]: NETFILTER_CFG table=filter:71 family=2 entries=1 op=nft_register_rule pid=3185 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.591000 audit[3185]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc5e7e900 a2=0 a3=1 items=0 ppid=3104 pid=3185 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.591000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:53:06.595000 audit[3188]: NETFILTER_CFG table=filter:72 family=2 entries=1 op=nft_register_rule pid=3188 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.595000 audit[3188]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc6f38510 a2=0 a3=1 items=0 ppid=3104 pid=3188 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.595000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:53:06.598000 audit[3191]: NETFILTER_CFG table=filter:73 family=2 entries=1 op=nft_register_rule pid=3191 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.598000 audit[3191]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffffb875dd0 
a2=0 a3=1 items=0 ppid=3104 pid=3191 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.598000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 23:53:06.599000 audit[3192]: NETFILTER_CFG table=nat:74 family=2 entries=1 op=nft_register_chain pid=3192 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.599000 audit[3192]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=ffffe2cc9660 a2=0 a3=1 items=0 ppid=3104 pid=3192 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.599000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 23:53:06.602000 audit[3194]: NETFILTER_CFG table=nat:75 family=2 entries=1 op=nft_register_rule pid=3194 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.602000 audit[3194]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=ffffdd0ef7f0 a2=0 a3=1 items=0 ppid=3104 pid=3194 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.602000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:53:06.606000 audit[3197]: NETFILTER_CFG table=nat:76 family=2 entries=1 op=nft_register_rule pid=3197 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.606000 audit[3197]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=fffffb9fa010 a2=0 a3=1 items=0 ppid=3104 pid=3197 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.606000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:53:06.607000 audit[3198]: NETFILTER_CFG table=nat:77 family=2 entries=1 op=nft_register_chain pid=3198 subj=system_u:system_r:kernel_t:s0 comm="iptables" Jan 20 23:53:06.607000 audit[3198]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc31d1310 a2=0 a3=1 items=0 ppid=3104 pid=3198 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.607000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 23:53:06.609000 audit[3200]: NETFILTER_CFG table=nat:78 family=2 entries=1 op=nft_register_rule pid=3200 subj=system_u:system_r:kernel_t:s0 comm="iptables" 
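The NETFILTER_CFG/SYSCALL/PROCTITLE triples that fill the rest of this section are kube-proxy registering its KUBE-* chains and rules through iptables-nft; each PROCTITLE value is the invoked command line, hex-encoded with NUL-separated argv elements. A short sketch that decodes one of the values above back into the iptables command:

```go
// Sketch: decode an audit PROCTITLE value (hex-encoded argv, NUL-separated),
// such as the ones emitted for kube-proxy's iptables calls above.
package main

import (
	"bytes"
	"encoding/hex"
	"fmt"
	"strings"
)

func decodeProctitle(h string) (string, error) {
	raw, err := hex.DecodeString(h)
	if err != nil {
		return "", err
	}
	parts := []string{}
	// argv elements are separated by NUL bytes in the proctitle record.
	for _, a := range bytes.Split(raw, []byte{0}) {
		if len(a) == 0 {
			continue
		}
		parts = append(parts, string(a))
	}
	return strings.Join(parts, " "), nil
}

func main() {
	// PROCTITLE from the KUBE-SERVICES nat chain registration above.
	const p = "69707461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174"
	cmd, err := decodeProctitle(p)
	if err != nil {
		panic(err)
	}
	fmt.Println(cmd) // iptables -w 5 -W 100000 -N KUBE-SERVICES -t nat
}
```

Decoding the other values in this section shows the usual kube-proxy bootstrap: KUBE-PROXY-CANARY, KUBE-EXTERNAL-SERVICES, KUBE-NODEPORTS, KUBE-SERVICES, KUBE-FORWARD, KUBE-PROXY-FIREWALL and KUBE-POSTROUTING chains being created in the filter and nat tables for both IPv4 and IPv6.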
Jan 20 23:53:06.609000 audit[3200]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=532 a0=3 a1=ffffe3df6000 a2=0 a3=1 items=0 ppid=3104 pid=3200 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.609000 audit: PROCTITLE proctitle=69707461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 23:53:06.637000 audit[3207]: NETFILTER_CFG table=filter:79 family=2 entries=8 op=nft_register_rule pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:06.637000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffe5529a80 a2=0 a3=1 items=0 ppid=3104 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.637000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:06.647000 audit[3207]: NETFILTER_CFG table=nat:80 family=2 entries=14 op=nft_register_chain pid=3207 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:06.647000 audit[3207]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5508 a0=3 a1=ffffe5529a80 a2=0 a3=1 items=0 ppid=3104 pid=3207 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.647000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:06.648000 audit[3212]: NETFILTER_CFG table=filter:81 family=10 entries=1 op=nft_register_chain pid=3212 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.648000 audit[3212]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=108 a0=3 a1=ffffecf02f10 a2=0 a3=1 items=0 ppid=3104 pid=3212 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.648000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D45585445524E414C2D5345525649434553002D740066696C746572 Jan 20 23:53:06.651000 audit[3214]: NETFILTER_CFG table=filter:82 family=10 entries=2 op=nft_register_chain pid=3214 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.651000 audit[3214]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=836 a0=3 a1=ffffc7d25340 a2=0 a3=1 items=0 ppid=3104 pid=3214 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.651000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C6520736572766963 Jan 20 23:53:06.655000 audit[3217]: NETFILTER_CFG table=filter:83 
family=10 entries=1 op=nft_register_rule pid=3217 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.655000 audit[3217]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=752 a0=3 a1=fffff69ebac0 a2=0 a3=1 items=0 ppid=3104 pid=3217 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.655000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E657465732065787465726E616C6C792D76697369626C652073657276 Jan 20 23:53:06.656000 audit[3218]: NETFILTER_CFG table=filter:84 family=10 entries=1 op=nft_register_chain pid=3218 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.656000 audit[3218]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=fffff45ae460 a2=0 a3=1 items=0 ppid=3104 pid=3218 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.656000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4E4F4445504F525453002D740066696C746572 Jan 20 23:53:06.659000 audit[3220]: NETFILTER_CFG table=filter:85 family=10 entries=1 op=nft_register_rule pid=3220 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.659000 audit[3220]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffd07cf9e0 a2=0 a3=1 items=0 ppid=3104 pid=3220 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.659000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206865616C746820636865636B207365727669636520706F727473002D6A004B5542452D4E4F4445504F525453 Jan 20 23:53:06.660000 audit[3221]: NETFILTER_CFG table=filter:86 family=10 entries=1 op=nft_register_chain pid=3221 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.660000 audit[3221]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffc31a0af0 a2=0 a3=1 items=0 ppid=3104 pid=3221 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.660000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D740066696C746572 Jan 20 23:53:06.663000 audit[3223]: NETFILTER_CFG table=filter:87 family=10 entries=1 op=nft_register_rule pid=3223 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.663000 audit[3223]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=744 a0=3 a1=ffffff6ba360 a2=0 a3=1 items=0 ppid=3104 pid=3223 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.663000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B554245 Jan 20 23:53:06.667000 audit[3226]: NETFILTER_CFG table=filter:88 family=10 entries=2 op=nft_register_chain pid=3226 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.667000 audit[3226]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=828 a0=3 a1=ffffe1f63190 a2=0 a3=1 items=0 ppid=3104 pid=3226 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.667000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D Jan 20 23:53:06.668000 audit[3227]: NETFILTER_CFG table=filter:89 family=10 entries=1 op=nft_register_chain pid=3227 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.668000 audit[3227]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffdc1e4dc0 a2=0 a3=1 items=0 ppid=3104 pid=3227 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.668000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D464F5257415244002D740066696C746572 Jan 20 23:53:06.670000 audit[3229]: NETFILTER_CFG table=filter:90 family=10 entries=1 op=nft_register_rule pid=3229 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.670000 audit[3229]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffce548dd0 a2=0 a3=1 items=0 ppid=3104 pid=3229 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.670000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320666F7277617264696E672072756C6573002D6A004B5542452D464F5257415244 Jan 20 23:53:06.672000 audit[3230]: NETFILTER_CFG table=filter:91 family=10 entries=1 op=nft_register_chain pid=3230 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.672000 audit[3230]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=104 a0=3 a1=ffffdd36d0f0 a2=0 a3=1 items=0 ppid=3104 pid=3230 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.672000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D50524F58592D4649524557414C4C002D740066696C746572 Jan 20 23:53:06.674000 audit[3232]: NETFILTER_CFG table=filter:92 family=10 entries=1 op=nft_register_rule pid=3232 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.674000 audit[3232]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=ffffc89ce420 a2=0 a3=1 items=0 ppid=3104 pid=3232 auid=4294967295 uid=0 
gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.674000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D6A Jan 20 23:53:06.678000 audit[3235]: NETFILTER_CFG table=filter:93 family=10 entries=1 op=nft_register_rule pid=3235 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.678000 audit[3235]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff8ba0650 a2=0 a3=1 items=0 ppid=3104 pid=3235 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.678000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C002D Jan 20 23:53:06.682000 audit[3238]: NETFILTER_CFG table=filter:94 family=10 entries=1 op=nft_register_rule pid=3238 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.682000 audit[3238]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=748 a0=3 a1=fffff49f2460 a2=0 a3=1 items=0 ppid=3104 pid=3238 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.682000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900464F5257415244002D740066696C746572002D6D00636F6E6E747261636B002D2D63747374617465004E4557002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573206C6F61642062616C616E636572206669726577616C6C Jan 20 23:53:06.683000 audit[3239]: NETFILTER_CFG table=nat:95 family=10 entries=1 op=nft_register_chain pid=3239 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.683000 audit[3239]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=96 a0=3 a1=fffffc76b620 a2=0 a3=1 items=0 ppid=3104 pid=3239 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.683000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D5345525649434553002D74006E6174 Jan 20 23:53:06.686000 audit[3241]: NETFILTER_CFG table=nat:96 family=10 entries=1 op=nft_register_rule pid=3241 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.686000 audit[3241]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=524 a0=3 a1=fffffc984600 a2=0 a3=1 items=0 ppid=3104 pid=3241 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.686000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 
23:53:06.689000 audit[3244]: NETFILTER_CFG table=nat:97 family=10 entries=1 op=nft_register_rule pid=3244 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.689000 audit[3244]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=528 a0=3 a1=ffffca9d7e80 a2=0 a3=1 items=0 ppid=3104 pid=3244 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.689000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900505245524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E65746573207365727669636520706F7274616C73002D6A004B5542452D5345525649434553 Jan 20 23:53:06.691000 audit[3245]: NETFILTER_CFG table=nat:98 family=10 entries=1 op=nft_register_chain pid=3245 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.691000 audit[3245]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffec81cbd0 a2=0 a3=1 items=0 ppid=3104 pid=3245 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.691000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D504F5354524F5554494E47002D74006E6174 Jan 20 23:53:06.693000 audit[3247]: NETFILTER_CFG table=nat:99 family=10 entries=2 op=nft_register_chain pid=3247 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.693000 audit[3247]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=612 a0=3 a1=fffffa222c90 a2=0 a3=1 items=0 ppid=3104 pid=3247 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.693000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4900504F5354524F5554494E47002D74006E6174002D6D00636F6D6D656E74002D2D636F6D6D656E74006B756265726E6574657320706F7374726F7574696E672072756C6573002D6A004B5542452D504F5354524F5554494E47 Jan 20 23:53:06.695000 audit[3248]: NETFILTER_CFG table=filter:100 family=10 entries=1 op=nft_register_chain pid=3248 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.695000 audit[3248]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=100 a0=3 a1=ffffd4ec8700 a2=0 a3=1 items=0 ppid=3104 pid=3248 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.695000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D4E004B5542452D4649524557414C4C002D740066696C746572 Jan 20 23:53:06.697000 audit[3250]: NETFILTER_CFG table=filter:101 family=10 entries=1 op=nft_register_rule pid=3250 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.697000 audit[3250]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffde325130 a2=0 a3=1 items=0 ppid=3104 pid=3250 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.697000 audit: PROCTITLE 
proctitle=6970367461626C6573002D770035002D5700313030303030002D4900494E505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:53:06.701000 audit[3253]: NETFILTER_CFG table=filter:102 family=10 entries=1 op=nft_register_rule pid=3253 subj=system_u:system_r:kernel_t:s0 comm="ip6tables" Jan 20 23:53:06.701000 audit[3253]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=228 a0=3 a1=ffffc9836050 a2=0 a3=1 items=0 ppid=3104 pid=3253 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.701000 audit: PROCTITLE proctitle=6970367461626C6573002D770035002D5700313030303030002D49004F5554505554002D740066696C746572002D6A004B5542452D4649524557414C4C Jan 20 23:53:06.704000 audit[3255]: NETFILTER_CFG table=filter:103 family=10 entries=3 op=nft_register_rule pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 23:53:06.704000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2088 a0=3 a1=fffff2e2c090 a2=0 a3=1 items=0 ppid=3104 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.704000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:06.704000 audit[3255]: NETFILTER_CFG table=nat:104 family=10 entries=7 op=nft_register_chain pid=3255 subj=system_u:system_r:kernel_t:s0 comm="ip6tables-resto" Jan 20 23:53:06.704000 audit[3255]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2056 a0=3 a1=fffff2e2c090 a2=0 a3=1 items=0 ppid=3104 pid=3255 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="ip6tables-resto" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:06.704000 audit: PROCTITLE proctitle=6970367461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:07.185821 kubelet[2940]: I0120 23:53:07.185749 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/kube-proxy-k8xv5" podStartSLOduration=2.185733462 podStartE2EDuration="2.185733462s" podCreationTimestamp="2026-01-20 23:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:07.174004868 +0000 UTC m=+7.148124003" watchObservedRunningTime="2026-01-20 23:53:07.185733462 +0000 UTC m=+7.159852557" Jan 20 23:53:07.902702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount639509434.mount: Deactivated successfully. 
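The audit PROCTITLE values above are hex-encoded command lines whose arguments are separated by NUL bytes. A minimal decoding sketch (the helper name is mine, not from the log), applied to the proctitle of the audit[3192] record logged at 23:53:06.599000 above:

# Decode an audit PROCTITLE value: hex-encoded argv, NUL-separated arguments.
def decode_proctitle(hex_value: str) -> str:
    raw = bytes.fromhex(hex_value)
    return " ".join(arg.decode("utf-8", "replace") for arg in raw.split(b"\x00") if arg)

# proctitle copied verbatim from the audit[3192] NETFILTER_CFG/SYSCALL record above
sample = (
    "69707461626C6573002D770035002D5700313030303030"
    "002D4E004B5542452D5345525649434553002D74006E6174"
)
print(decode_proctitle(sample))  # -> iptables -w 5 -W 100000 -N KUBE-SERVICES -t nat

Decoded, that record is kube-proxy creating the KUBE-SERVICES chain in the nat table via /usr/sbin/xtables-nft-multi, matching the table=nat and comm="iptables" fields of the same record.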
Jan 20 23:53:08.209100 containerd[1670]: time="2026-01-20T23:53:08.209047827Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator:v1.38.7\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:08.210155 containerd[1670]: time="2026-01-20T23:53:08.210097790Z" level=info msg="stop pulling image quay.io/tigera/operator:v1.38.7: active requests=0, bytes read=20773434" Jan 20 23:53:08.211488 containerd[1670]: time="2026-01-20T23:53:08.211305274Z" level=info msg="ImageCreate event name:\"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:08.213582 containerd[1670]: time="2026-01-20T23:53:08.213547160Z" level=info msg="ImageCreate event name:\"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:08.214286 containerd[1670]: time="2026-01-20T23:53:08.214255682Z" level=info msg="Pulled image \"quay.io/tigera/operator:v1.38.7\" with image id \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\", repo tag \"quay.io/tigera/operator:v1.38.7\", repo digest \"quay.io/tigera/operator@sha256:1b629a1403f5b6d7243f7dd523d04b8a50352a33c1d4d6970b6002a8733acf2e\", size \"22147999\" in 2.005971697s" Jan 20 23:53:08.214334 containerd[1670]: time="2026-01-20T23:53:08.214291362Z" level=info msg="PullImage \"quay.io/tigera/operator:v1.38.7\" returns image reference \"sha256:19f52e4b7ea471a91d4186e9701288b905145dc20d4928cbbf2eac8d9dfce54b\"" Jan 20 23:53:08.219677 containerd[1670]: time="2026-01-20T23:53:08.219591977Z" level=info msg="CreateContainer within sandbox \"10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:0,}" Jan 20 23:53:08.229908 containerd[1670]: time="2026-01-20T23:53:08.229349245Z" level=info msg="Container cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:08.235722 containerd[1670]: time="2026-01-20T23:53:08.235685623Z" level=info msg="CreateContainer within sandbox \"10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:0,} returns container id \"cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86\"" Jan 20 23:53:08.236215 containerd[1670]: time="2026-01-20T23:53:08.236175545Z" level=info msg="StartContainer for \"cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86\"" Jan 20 23:53:08.237023 containerd[1670]: time="2026-01-20T23:53:08.236996987Z" level=info msg="connecting to shim cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86" address="unix:///run/containerd/s/95ef60629d42e62855d3d23a825537ea6905bede6ea3b7f9e5c34c24d3a72036" protocol=ttrpc version=3 Jan 20 23:53:08.262674 systemd[1]: Started cri-containerd-cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86.scope - libcontainer container cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86. 
Jan 20 23:53:08.272000 audit: BPF prog-id=146 op=LOAD Jan 20 23:53:08.272000 audit: BPF prog-id=147 op=LOAD Jan 20 23:53:08.272000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.272000 audit: BPF prog-id=147 op=UNLOAD Jan 20 23:53:08.272000 audit[3265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.272000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.273000 audit: BPF prog-id=148 op=LOAD Jan 20 23:53:08.273000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.273000 audit: BPF prog-id=149 op=LOAD Jan 20 23:53:08.273000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.273000 audit: BPF prog-id=149 op=UNLOAD Jan 20 23:53:08.273000 audit[3265]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.273000 audit: BPF prog-id=148 op=UNLOAD Jan 20 23:53:08.273000 audit[3265]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.273000 audit: BPF prog-id=150 op=LOAD Jan 20 23:53:08.273000 audit[3265]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3046 pid=3265 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:08.273000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6366393737306261383036356464633366313463616137663036393633 Jan 20 23:53:08.287758 containerd[1670]: time="2026-01-20T23:53:08.287389251Z" level=info msg="StartContainer for \"cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86\" returns successfully" Jan 20 23:53:09.178183 kubelet[2940]: I0120 23:53:09.178120 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="tigera-operator/tigera-operator-7dcd859c48-fdt5g" podStartSLOduration=2.170877016 podStartE2EDuration="4.178103797s" podCreationTimestamp="2026-01-20 23:53:05 +0000 UTC" firstStartedPulling="2026-01-20 23:53:06.207788463 +0000 UTC m=+6.181907478" lastFinishedPulling="2026-01-20 23:53:08.215015204 +0000 UTC m=+8.189134259" observedRunningTime="2026-01-20 23:53:09.177639715 +0000 UTC m=+9.151758770" watchObservedRunningTime="2026-01-20 23:53:09.178103797 +0000 UTC m=+9.152222852" Jan 20 23:53:13.586941 sudo[1966]: pam_unix(sudo:session): session closed for user root Jan 20 23:53:13.586000 audit[1966]: USER_END pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.591288 kernel: kauditd_printk_skb: 224 callbacks suppressed Jan 20 23:53:13.591352 kernel: audit: type=1106 audit(1768953193.586:518): pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_limits,pam_env,pam_umask,pam_unix acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.586000 audit[1966]: CRED_DISP pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.596350 kernel: audit: type=1104 audit(1768953193.586:519): pid=1966 uid=500 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="root" exe="/usr/bin/sudo" hostname=? addr=? terminal=? 
res=success' Jan 20 23:53:13.686212 sshd[1965]: Connection closed by 20.161.92.111 port 52042 Jan 20 23:53:13.689089 sshd-session[1961]: pam_unix(sshd:session): session closed for user core Jan 20 23:53:13.690000 audit[1961]: USER_END pid=1961 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:53:13.695420 systemd[1]: sshd@8-10.0.2.209:22-20.161.92.111:52042.service: Deactivated successfully. Jan 20 23:53:13.697418 systemd[1]: session-10.scope: Deactivated successfully. Jan 20 23:53:13.690000 audit[1961]: CRED_DISP pid=1961 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:53:13.700488 kernel: audit: type=1106 audit(1768953193.690:520): pid=1961 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:53:13.700567 kernel: audit: type=1104 audit(1768953193.690:521): pid=1961 uid=0 auid=500 ses=10 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:53:13.695000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.209:22-20.161.92.111:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.704290 kernel: audit: type=1131 audit(1768953193.695:522): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@8-10.0.2.209:22-20.161.92.111:52042 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:53:13.703911 systemd[1]: session-10.scope: Consumed 8.705s CPU time, 222.7M memory peak. Jan 20 23:53:13.705077 systemd-logind[1651]: Session 10 logged out. Waiting for processes to exit. Jan 20 23:53:13.707007 systemd-logind[1651]: Removed session 10. 
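The runc container start logged around 23:53:08.272-273 above also emits bursts of "BPF prog-id=... op=LOAD" / "op=UNLOAD" audit records. A small sketch of my own (not part of the log; assumes the journal text is piped in on stdin) that tallies them per program id:

import re
import sys
from collections import defaultdict

# Count "audit: BPF prog-id=<N> op=LOAD|UNLOAD" records in journal text read from stdin.
counts = defaultdict(lambda: {"LOAD": 0, "UNLOAD": 0})
pattern = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

for line in sys.stdin:
    for prog_id, op in pattern.findall(line):
        counts[prog_id][op] += 1

for prog_id in sorted(counts, key=int):
    ops = counts[prog_id]
    print(f"prog-id={prog_id}: LOAD={ops['LOAD']} UNLOAD={ops['UNLOAD']}")

Within this excerpt, prog-ids 147, 148 and 149 are loaded and unloaded again, while 146 and 150 show only a LOAD in this window.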
Jan 20 23:53:15.520000 audit[3355]: NETFILTER_CFG table=filter:105 family=2 entries=15 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:15.525500 kernel: audit: type=1325 audit(1768953195.520:523): table=filter:105 family=2 entries=15 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:15.520000 audit[3355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdba9e5e0 a2=0 a3=1 items=0 ppid=3104 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.520000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:15.540384 kernel: audit: type=1300 audit(1768953195.520:523): arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffdba9e5e0 a2=0 a3=1 items=0 ppid=3104 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.540442 kernel: audit: type=1327 audit(1768953195.520:523): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:15.541000 audit[3355]: NETFILTER_CFG table=nat:106 family=2 entries=12 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:15.541000 audit[3355]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdba9e5e0 a2=0 a3=1 items=0 ppid=3104 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.548973 kernel: audit: type=1325 audit(1768953195.541:524): table=nat:106 family=2 entries=12 op=nft_register_rule pid=3355 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:15.549027 kernel: audit: type=1300 audit(1768953195.541:524): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdba9e5e0 a2=0 a3=1 items=0 ppid=3104 pid=3355 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.541000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:15.554000 audit[3357]: NETFILTER_CFG table=filter:107 family=2 entries=16 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:15.554000 audit[3357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffc7c303e0 a2=0 a3=1 items=0 ppid=3104 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.554000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:15.562000 audit[3357]: NETFILTER_CFG table=nat:108 family=2 entries=12 op=nft_register_rule pid=3357 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 23:53:15.562000 audit[3357]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffc7c303e0 a2=0 a3=1 items=0 ppid=3104 pid=3357 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:15.562000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.865000 audit[3359]: NETFILTER_CFG table=filter:109 family=2 entries=17 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.868935 kernel: kauditd_printk_skb: 7 callbacks suppressed Jan 20 23:53:19.869052 kernel: audit: type=1325 audit(1768953199.865:527): table=filter:109 family=2 entries=17 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.865000 audit[3359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff64eb5d0 a2=0 a3=1 items=0 ppid=3104 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.873019 kernel: audit: type=1300 audit(1768953199.865:527): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=fffff64eb5d0 a2=0 a3=1 items=0 ppid=3104 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.865000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.875619 kernel: audit: type=1327 audit(1768953199.865:527): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.875000 audit[3359]: NETFILTER_CFG table=nat:110 family=2 entries=12 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.878177 kernel: audit: type=1325 audit(1768953199.875:528): table=nat:110 family=2 entries=12 op=nft_register_rule pid=3359 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.875000 audit[3359]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff64eb5d0 a2=0 a3=1 items=0 ppid=3104 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.875000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.887938 kernel: audit: type=1300 audit(1768953199.875:528): arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff64eb5d0 a2=0 a3=1 items=0 ppid=3104 pid=3359 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.888067 kernel: audit: type=1327 audit(1768953199.875:528): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.895000 audit[3361]: NETFILTER_CFG 
table=filter:111 family=2 entries=18 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.895000 audit[3361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceff0920 a2=0 a3=1 items=0 ppid=3104 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.902189 kernel: audit: type=1325 audit(1768953199.895:529): table=filter:111 family=2 entries=18 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.902271 kernel: audit: type=1300 audit(1768953199.895:529): arch=c00000b7 syscall=211 success=yes exit=6736 a0=3 a1=ffffceff0920 a2=0 a3=1 items=0 ppid=3104 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.895000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.904144 kernel: audit: type=1327 audit(1768953199.895:529): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:19.902000 audit[3361]: NETFILTER_CFG table=nat:112 family=2 entries=12 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.906031 kernel: audit: type=1325 audit(1768953199.902:530): table=nat:112 family=2 entries=12 op=nft_register_rule pid=3361 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:19.902000 audit[3361]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffceff0920 a2=0 a3=1 items=0 ppid=3104 pid=3361 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:19.902000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:20.915000 audit[3363]: NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:20.915000 audit[3363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=7480 a0=3 a1=fffff4059ba0 a2=0 a3=1 items=0 ppid=3104 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:20.915000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:20.922000 audit[3363]: NETFILTER_CFG table=nat:114 family=2 entries=12 op=nft_register_rule pid=3363 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:20.922000 audit[3363]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=fffff4059ba0 a2=0 a3=1 items=0 ppid=3104 pid=3363 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:20.922000 audit: PROCTITLE 
proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:22.348824 systemd[1]: Created slice kubepods-besteffort-pode9babab8_a410_4c04_9b5f_c467c16d2dfb.slice - libcontainer container kubepods-besteffort-pode9babab8_a410_4c04_9b5f_c467c16d2dfb.slice. Jan 20 23:53:22.355902 kubelet[2940]: I0120 23:53:22.355840 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"typha-certs\" (UniqueName: \"kubernetes.io/secret/e9babab8-a410-4c04-9b5f-c467c16d2dfb-typha-certs\") pod \"calico-typha-9dbb5bc59-8vfds\" (UID: \"e9babab8-a410-4c04-9b5f-c467c16d2dfb\") " pod="calico-system/calico-typha-9dbb5bc59-8vfds" Jan 20 23:53:22.355902 kubelet[2940]: I0120 23:53:22.355889 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ffw5n\" (UniqueName: \"kubernetes.io/projected/e9babab8-a410-4c04-9b5f-c467c16d2dfb-kube-api-access-ffw5n\") pod \"calico-typha-9dbb5bc59-8vfds\" (UID: \"e9babab8-a410-4c04-9b5f-c467c16d2dfb\") " pod="calico-system/calico-typha-9dbb5bc59-8vfds" Jan 20 23:53:22.355902 kubelet[2940]: I0120 23:53:22.355914 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/e9babab8-a410-4c04-9b5f-c467c16d2dfb-tigera-ca-bundle\") pod \"calico-typha-9dbb5bc59-8vfds\" (UID: \"e9babab8-a410-4c04-9b5f-c467c16d2dfb\") " pod="calico-system/calico-typha-9dbb5bc59-8vfds" Jan 20 23:53:22.373000 audit[3365]: NETFILTER_CFG table=filter:115 family=2 entries=21 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:22.373000 audit[3365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdd42ec60 a2=0 a3=1 items=0 ppid=3104 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.373000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:22.380000 audit[3365]: NETFILTER_CFG table=nat:116 family=2 entries=12 op=nft_register_rule pid=3365 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:22.380000 audit[3365]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdd42ec60 a2=0 a3=1 items=0 ppid=3104 pid=3365 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:22.578048 systemd[1]: Created slice kubepods-besteffort-podfc775c7b_978e_4f4e_b8be_7a5870130c3e.slice - libcontainer container kubepods-besteffort-podfc775c7b_978e_4f4e_b8be_7a5870130c3e.slice. 
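Each NETFILTER_CFG record above reports the table, address family, operation and number of entries touched by one iptables/iptables-restore transaction. A rough tally over the flowed journal text (script and names are mine, reading stdin):

import re
import sys
from collections import Counter

# Sum "entries=<N>" per (table, family, op) from NETFILTER_CFG records,
# e.g. "NETFILTER_CFG table=filter:113 family=2 entries=19 op=nft_register_rule".
totals = Counter()
record = re.compile(r"NETFILTER_CFG table=(\w+):\d+ family=(\d+) entries=(\d+) op=(\w+)")

for line in sys.stdin:
    for table, family, entries, op in record.findall(line):
        totals[(table, family, op)] += int(entries)

for (table, family, op), n in sorted(totals.items()):
    print(f"table={table} family={family} {op}: {n} entries")

This makes the kube-proxy resync pattern visible: family=2 (IPv4) filter-table rule counts grow over successive iptables-restore runs (15, 16, 17, ... entries), while the nat-table transactions stay at 12 entries.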
Jan 20 23:53:22.652173 containerd[1670]: time="2026-01-20T23:53:22.652048677Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9dbb5bc59-8vfds,Uid:e9babab8-a410-4c04-9b5f-c467c16d2dfb,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:22.657312 kubelet[2940]: I0120 23:53:22.657283 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-net-dir\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-cni-net-dir\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657312 kubelet[2940]: I0120 23:53:22.657322 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"policysync\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-policysync\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657312 kubelet[2940]: I0120 23:53:22.657340 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5sct9\" (UniqueName: \"kubernetes.io/projected/fc775c7b-978e-4f4e-b8be-7a5870130c3e-kube-api-access-5sct9\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657625 kubelet[2940]: I0120 23:53:22.657441 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-log-dir\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-cni-log-dir\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657625 kubelet[2940]: I0120 23:53:22.657489 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"flexvol-driver-host\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-flexvol-driver-host\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657625 kubelet[2940]: I0120 23:53:22.657535 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-bin-dir\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-cni-bin-dir\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657625 kubelet[2940]: I0120 23:53:22.657556 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-certs\" (UniqueName: \"kubernetes.io/secret/fc775c7b-978e-4f4e-b8be-7a5870130c3e-node-certs\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657625 kubelet[2940]: I0120 23:53:22.657573 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"lib-modules\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-lib-modules\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657742 kubelet[2940]: I0120 23:53:22.657587 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-calico\" (UniqueName: 
\"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-var-lib-calico\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657742 kubelet[2940]: I0120 23:53:22.657600 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-run-calico\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-var-run-calico\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657742 kubelet[2940]: I0120 23:53:22.657614 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"xtables-lock\" (UniqueName: \"kubernetes.io/host-path/fc775c7b-978e-4f4e-b8be-7a5870130c3e-xtables-lock\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.657742 kubelet[2940]: I0120 23:53:22.657649 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/fc775c7b-978e-4f4e-b8be-7a5870130c3e-tigera-ca-bundle\") pod \"calico-node-g5whf\" (UID: \"fc775c7b-978e-4f4e-b8be-7a5870130c3e\") " pod="calico-system/calico-node-g5whf" Jan 20 23:53:22.677432 containerd[1670]: time="2026-01-20T23:53:22.677382229Z" level=info msg="connecting to shim e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e" address="unix:///run/containerd/s/d8a55428b441b614144776d8b3d18ee2224c73e08d69d0105c7d0a9dc23e84df" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:22.703994 systemd[1]: Started cri-containerd-e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e.scope - libcontainer container e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e. 
Jan 20 23:53:22.713000 audit: BPF prog-id=151 op=LOAD Jan 20 23:53:22.713000 audit: BPF prog-id=152 op=LOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=152 op=UNLOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=153 op=LOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=154 op=LOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=154 op=UNLOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=153 op=UNLOAD Jan 20 23:53:22.713000 audit[3389]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.713000 audit: BPF prog-id=155 op=LOAD Jan 20 23:53:22.713000 audit[3389]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3376 pid=3389 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.713000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6537346435623462376434323335343732356461346431646361653235 Jan 20 23:53:22.739834 containerd[1670]: time="2026-01-20T23:53:22.739764448Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-typha-9dbb5bc59-8vfds,Uid:e9babab8-a410-4c04-9b5f-c467c16d2dfb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e\"" Jan 20 23:53:22.742679 containerd[1670]: time="2026-01-20T23:53:22.742639136Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\"" Jan 20 23:53:22.762207 kubelet[2940]: E0120 23:53:22.762114 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.762207 kubelet[2940]: W0120 23:53:22.762140 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.762207 kubelet[2940]: E0120 23:53:22.762169 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.766612 kubelet[2940]: E0120 23:53:22.765122 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.766612 kubelet[2940]: W0120 23:53:22.765158 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.766612 kubelet[2940]: E0120 23:53:22.765177 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.779540 kubelet[2940]: E0120 23:53:22.779269 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.779540 kubelet[2940]: W0120 23:53:22.779334 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.779540 kubelet[2940]: E0120 23:53:22.779378 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.781964 kubelet[2940]: E0120 23:53:22.781767 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:22.853561 kubelet[2940]: E0120 23:53:22.853520 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.853561 kubelet[2940]: W0120 23:53:22.853555 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.853711 kubelet[2940]: E0120 23:53:22.853578 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.853736 kubelet[2940]: E0120 23:53:22.853720 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.853780 kubelet[2940]: W0120 23:53:22.853727 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.853806 kubelet[2940]: E0120 23:53:22.853782 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.853956 kubelet[2940]: E0120 23:53:22.853941 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.853988 kubelet[2940]: W0120 23:53:22.853956 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.853988 kubelet[2940]: E0120 23:53:22.853965 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.854126 kubelet[2940]: E0120 23:53:22.854112 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.854126 kubelet[2940]: W0120 23:53:22.854123 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.854174 kubelet[2940]: E0120 23:53:22.854132 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.854298 kubelet[2940]: E0120 23:53:22.854286 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.854324 kubelet[2940]: W0120 23:53:22.854297 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.854324 kubelet[2940]: E0120 23:53:22.854307 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.854785 kubelet[2940]: E0120 23:53:22.854571 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.854785 kubelet[2940]: W0120 23:53:22.854589 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.854785 kubelet[2940]: E0120 23:53:22.854601 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.854785 kubelet[2940]: E0120 23:53:22.854779 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.854785 kubelet[2940]: W0120 23:53:22.854788 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.854935 kubelet[2940]: E0120 23:53:22.854811 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.855482 kubelet[2940]: E0120 23:53:22.855030 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.855482 kubelet[2940]: W0120 23:53:22.855043 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.855482 kubelet[2940]: E0120 23:53:22.855073 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.855589 kubelet[2940]: E0120 23:53:22.855511 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.855589 kubelet[2940]: W0120 23:53:22.855533 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.855589 kubelet[2940]: E0120 23:53:22.855546 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.855806 kubelet[2940]: E0120 23:53:22.855777 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.855806 kubelet[2940]: W0120 23:53:22.855790 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.855806 kubelet[2940]: E0120 23:53:22.855801 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.856216 kubelet[2940]: E0120 23:53:22.856138 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.856216 kubelet[2940]: W0120 23:53:22.856154 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.856303 kubelet[2940]: E0120 23:53:22.856273 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857117 kubelet[2940]: E0120 23:53:22.856754 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857117 kubelet[2940]: W0120 23:53:22.856770 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857117 kubelet[2940]: E0120 23:53:22.856783 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857117 kubelet[2940]: E0120 23:53:22.857007 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857117 kubelet[2940]: W0120 23:53:22.857016 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857117 kubelet[2940]: E0120 23:53:22.857025 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.857294 kubelet[2940]: E0120 23:53:22.857201 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857294 kubelet[2940]: W0120 23:53:22.857230 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857294 kubelet[2940]: E0120 23:53:22.857240 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857415 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857978 kubelet[2940]: W0120 23:53:22.857428 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857447 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857616 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857978 kubelet[2940]: W0120 23:53:22.857624 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857632 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857796 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.857978 kubelet[2940]: W0120 23:53:22.857804 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857813 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.857978 kubelet[2940]: E0120 23:53:22.857983 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.858232 kubelet[2940]: W0120 23:53:22.857992 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.858232 kubelet[2940]: E0120 23:53:22.858001 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.858232 kubelet[2940]: E0120 23:53:22.858160 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.858232 kubelet[2940]: W0120 23:53:22.858169 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.858232 kubelet[2940]: E0120 23:53:22.858177 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.858379 kubelet[2940]: E0120 23:53:22.858313 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.858379 kubelet[2940]: W0120 23:53:22.858338 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.858379 kubelet[2940]: E0120 23:53:22.858349 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.859920 kubelet[2940]: E0120 23:53:22.859610 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.859920 kubelet[2940]: W0120 23:53:22.859627 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.859920 kubelet[2940]: E0120 23:53:22.859640 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.859920 kubelet[2940]: I0120 23:53:22.859668 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/d9863653-2d98-4479-88c2-8614b7871a32-kubelet-dir\") pod \"csi-node-driver-f8rjq\" (UID: \"d9863653-2d98-4479-88c2-8614b7871a32\") " pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:22.859920 kubelet[2940]: E0120 23:53:22.859862 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.859920 kubelet[2940]: W0120 23:53:22.859872 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.859920 kubelet[2940]: E0120 23:53:22.859881 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.859920 kubelet[2940]: I0120 23:53:22.859902 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-td2cf\" (UniqueName: \"kubernetes.io/projected/d9863653-2d98-4479-88c2-8614b7871a32-kube-api-access-td2cf\") pod \"csi-node-driver-f8rjq\" (UID: \"d9863653-2d98-4479-88c2-8614b7871a32\") " pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:22.860132 kubelet[2940]: E0120 23:53:22.860100 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.860132 kubelet[2940]: W0120 23:53:22.860114 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.860132 kubelet[2940]: E0120 23:53:22.860126 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.860302 kubelet[2940]: E0120 23:53:22.860285 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.860302 kubelet[2940]: W0120 23:53:22.860298 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.860365 kubelet[2940]: E0120 23:53:22.860308 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.860896 kubelet[2940]: E0120 23:53:22.860482 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.860896 kubelet[2940]: W0120 23:53:22.860497 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.860896 kubelet[2940]: E0120 23:53:22.860506 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.860896 kubelet[2940]: I0120 23:53:22.860527 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"varrun\" (UniqueName: \"kubernetes.io/host-path/d9863653-2d98-4479-88c2-8614b7871a32-varrun\") pod \"csi-node-driver-f8rjq\" (UID: \"d9863653-2d98-4479-88c2-8614b7871a32\") " pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:22.860896 kubelet[2940]: E0120 23:53:22.860695 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.860896 kubelet[2940]: W0120 23:53:22.860705 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.860896 kubelet[2940]: E0120 23:53:22.860733 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.860896 kubelet[2940]: I0120 23:53:22.860755 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/d9863653-2d98-4479-88c2-8614b7871a32-registration-dir\") pod \"csi-node-driver-f8rjq\" (UID: \"d9863653-2d98-4479-88c2-8614b7871a32\") " pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:22.861188 kubelet[2940]: E0120 23:53:22.861173 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.861369 kubelet[2940]: W0120 23:53:22.861247 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.861369 kubelet[2940]: E0120 23:53:22.861262 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.861511 kubelet[2940]: E0120 23:53:22.861499 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.861570 kubelet[2940]: W0120 23:53:22.861559 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.861619 kubelet[2940]: E0120 23:53:22.861610 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.861918 kubelet[2940]: E0120 23:53:22.861906 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.862233 kubelet[2940]: W0120 23:53:22.862037 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.862233 kubelet[2940]: E0120 23:53:22.862059 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.862395 kubelet[2940]: E0120 23:53:22.862382 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.862472 kubelet[2940]: W0120 23:53:22.862441 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.862526 kubelet[2940]: E0120 23:53:22.862515 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.862735 kubelet[2940]: E0120 23:53:22.862723 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.862802 kubelet[2940]: W0120 23:53:22.862791 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.862857 kubelet[2940]: E0120 23:53:22.862847 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.862964 kubelet[2940]: I0120 23:53:22.862936 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/d9863653-2d98-4479-88c2-8614b7871a32-socket-dir\") pod \"csi-node-driver-f8rjq\" (UID: \"d9863653-2d98-4479-88c2-8614b7871a32\") " pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:22.863147 kubelet[2940]: E0120 23:53:22.863135 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.863208 kubelet[2940]: W0120 23:53:22.863198 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.863261 kubelet[2940]: E0120 23:53:22.863251 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.863521 kubelet[2940]: E0120 23:53:22.863508 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.863592 kubelet[2940]: W0120 23:53:22.863580 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.863650 kubelet[2940]: E0120 23:53:22.863640 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.863952 kubelet[2940]: E0120 23:53:22.863937 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.864126 kubelet[2940]: W0120 23:53:22.864024 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.864126 kubelet[2940]: E0120 23:53:22.864042 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.864501 kubelet[2940]: E0120 23:53:22.864484 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.864610 kubelet[2940]: W0120 23:53:22.864575 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.864610 kubelet[2940]: E0120 23:53:22.864591 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.883404 containerd[1670]: time="2026-01-20T23:53:22.883334298Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5whf,Uid:fc775c7b-978e-4f4e-b8be-7a5870130c3e,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:22.912268 containerd[1670]: time="2026-01-20T23:53:22.912118380Z" level=info msg="connecting to shim 4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c" address="unix:///run/containerd/s/f694d37f03e5a43773052019e2402347282d310dce1b9158b9ee44e3ecc4bc72" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:22.934712 systemd[1]: Started cri-containerd-4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c.scope - libcontainer container 4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c. Jan 20 23:53:22.943000 audit: BPF prog-id=156 op=LOAD Jan 20 23:53:22.943000 audit: BPF prog-id=157 op=LOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.943000 audit: BPF prog-id=157 op=UNLOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.943000 audit: BPF prog-id=158 op=LOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.943000 audit: BPF prog-id=159 op=LOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.943000 audit: BPF prog-id=159 op=UNLOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.943000 audit: BPF prog-id=158 op=UNLOAD Jan 20 23:53:22.943000 audit[3486]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.943000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.944000 audit: BPF prog-id=160 op=LOAD Jan 20 23:53:22.944000 audit[3486]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3474 pid=3486 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:22.944000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3430323763616234616564383035346432306535343632333731386534 Jan 20 23:53:22.961296 containerd[1670]: time="2026-01-20T23:53:22.961248481Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-node-g5whf,Uid:fc775c7b-978e-4f4e-b8be-7a5870130c3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\"" Jan 20 23:53:22.964144 kubelet[2940]: E0120 23:53:22.964117 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.964144 kubelet[2940]: W0120 23:53:22.964139 2940 
driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.964274 kubelet[2940]: E0120 23:53:22.964195 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.964539 kubelet[2940]: E0120 23:53:22.964518 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.964539 kubelet[2940]: W0120 23:53:22.964537 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.964609 kubelet[2940]: E0120 23:53:22.964552 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.964799 kubelet[2940]: E0120 23:53:22.964785 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.964844 kubelet[2940]: W0120 23:53:22.964798 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.964844 kubelet[2940]: E0120 23:53:22.964810 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.965043 kubelet[2940]: E0120 23:53:22.965029 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.965043 kubelet[2940]: W0120 23:53:22.965041 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.965244 kubelet[2940]: E0120 23:53:22.965050 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.965337 kubelet[2940]: E0120 23:53:22.965321 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.965401 kubelet[2940]: W0120 23:53:22.965388 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.965478 kubelet[2940]: E0120 23:53:22.965448 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.965772 kubelet[2940]: E0120 23:53:22.965754 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.965772 kubelet[2940]: W0120 23:53:22.965769 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.965837 kubelet[2940]: E0120 23:53:22.965780 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.965958 kubelet[2940]: E0120 23:53:22.965944 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.965958 kubelet[2940]: W0120 23:53:22.965956 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.966011 kubelet[2940]: E0120 23:53:22.965966 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.966154 kubelet[2940]: E0120 23:53:22.966140 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.966154 kubelet[2940]: W0120 23:53:22.966152 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.966225 kubelet[2940]: E0120 23:53:22.966161 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.966391 kubelet[2940]: E0120 23:53:22.966374 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.966391 kubelet[2940]: W0120 23:53:22.966392 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.966510 kubelet[2940]: E0120 23:53:22.966405 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.966576 kubelet[2940]: E0120 23:53:22.966563 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.966576 kubelet[2940]: W0120 23:53:22.966575 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.966635 kubelet[2940]: E0120 23:53:22.966583 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.966738 kubelet[2940]: E0120 23:53:22.966726 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.966738 kubelet[2940]: W0120 23:53:22.966736 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.966827 kubelet[2940]: E0120 23:53:22.966745 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.967442 kubelet[2940]: E0120 23:53:22.967414 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.967531 kubelet[2940]: W0120 23:53:22.967442 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.967531 kubelet[2940]: E0120 23:53:22.967514 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.967719 kubelet[2940]: E0120 23:53:22.967703 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.967719 kubelet[2940]: W0120 23:53:22.967715 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.967782 kubelet[2940]: E0120 23:53:22.967726 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.967885 kubelet[2940]: E0120 23:53:22.967866 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.967885 kubelet[2940]: W0120 23:53:22.967883 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.968006 kubelet[2940]: E0120 23:53:22.967893 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.968051 kubelet[2940]: E0120 23:53:22.968040 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.968051 kubelet[2940]: W0120 23:53:22.968048 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.968099 kubelet[2940]: E0120 23:53:22.968056 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.968262 kubelet[2940]: E0120 23:53:22.968250 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.968297 kubelet[2940]: W0120 23:53:22.968262 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.968297 kubelet[2940]: E0120 23:53:22.968272 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.968484 kubelet[2940]: E0120 23:53:22.968440 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.968484 kubelet[2940]: W0120 23:53:22.968464 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.968484 kubelet[2940]: E0120 23:53:22.968475 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.968729 kubelet[2940]: E0120 23:53:22.968715 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.968729 kubelet[2940]: W0120 23:53:22.968729 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.968784 kubelet[2940]: E0120 23:53:22.968739 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.969018 kubelet[2940]: E0120 23:53:22.969001 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.969048 kubelet[2940]: W0120 23:53:22.969018 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.969048 kubelet[2940]: E0120 23:53:22.969031 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.969283 kubelet[2940]: E0120 23:53:22.969268 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.969283 kubelet[2940]: W0120 23:53:22.969282 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.969346 kubelet[2940]: E0120 23:53:22.969294 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.969500 kubelet[2940]: E0120 23:53:22.969476 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.969500 kubelet[2940]: W0120 23:53:22.969494 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.969591 kubelet[2940]: E0120 23:53:22.969505 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.969674 kubelet[2940]: E0120 23:53:22.969655 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.969718 kubelet[2940]: W0120 23:53:22.969675 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.969718 kubelet[2940]: E0120 23:53:22.969686 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.969903 kubelet[2940]: E0120 23:53:22.969889 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.969932 kubelet[2940]: W0120 23:53:22.969902 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.969932 kubelet[2940]: E0120 23:53:22.969913 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.970098 kubelet[2940]: E0120 23:53:22.970085 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.970098 kubelet[2940]: W0120 23:53:22.970096 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.970150 kubelet[2940]: E0120 23:53:22.970106 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:22.970308 kubelet[2940]: E0120 23:53:22.970295 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.970308 kubelet[2940]: W0120 23:53:22.970306 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.970354 kubelet[2940]: E0120 23:53:22.970315 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:22.979056 kubelet[2940]: E0120 23:53:22.979020 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:22.979056 kubelet[2940]: W0120 23:53:22.979042 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:22.979056 kubelet[2940]: E0120 23:53:22.979057 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:23.396000 audit[3541]: NETFILTER_CFG table=filter:117 family=2 entries=22 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:23.396000 audit[3541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd562e140 a2=0 a3=1 items=0 ppid=3104 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:23.396000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:23.402000 audit[3541]: NETFILTER_CFG table=nat:118 family=2 entries=12 op=nft_register_rule pid=3541 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:23.402000 audit[3541]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd562e140 a2=0 a3=1 items=0 ppid=3104 pid=3541 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:23.402000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:24.009702 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount3810906309.mount: Deactivated successfully. 
Jan 20 23:53:24.126640 kubelet[2940]: E0120 23:53:24.126596 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:24.406256 containerd[1670]: time="2026-01-20T23:53:24.406144412Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:24.407980 containerd[1670]: time="2026-01-20T23:53:24.407929977Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/typha:v3.30.4: active requests=0, bytes read=31716861" Jan 20 23:53:24.409491 containerd[1670]: time="2026-01-20T23:53:24.409421981Z" level=info msg="ImageCreate event name:\"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:24.411812 containerd[1670]: time="2026-01-20T23:53:24.411761188Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:24.412288 containerd[1670]: time="2026-01-20T23:53:24.412245309Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/typha:v3.30.4\" with image id \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\", repo tag \"ghcr.io/flatcar/calico/typha:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/typha@sha256:6f437220b5b3c627fb4a0fc8dc323363101f3c22a8f337612c2a1ddfb73b810c\", size \"33090541\" in 1.669567653s" Jan 20 23:53:24.412288 containerd[1670]: time="2026-01-20T23:53:24.412283029Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/typha:v3.30.4\" returns image reference \"sha256:5fe38d12a54098df5aaf5ec7228dc2f976f60cb4f434d7256f03126b004fdc5b\"" Jan 20 23:53:24.414280 containerd[1670]: time="2026-01-20T23:53:24.414098995Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\"" Jan 20 23:53:24.423334 containerd[1670]: time="2026-01-20T23:53:24.423293141Z" level=info msg="CreateContainer within sandbox \"e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e\" for container &ContainerMetadata{Name:calico-typha,Attempt:0,}" Jan 20 23:53:24.433850 containerd[1670]: time="2026-01-20T23:53:24.433579010Z" level=info msg="Container 8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:24.442726 containerd[1670]: time="2026-01-20T23:53:24.442619796Z" level=info msg="CreateContainer within sandbox \"e74d5b4b7d42354725da4d1dcae25f4605dd623007d919baa2aec21c5b82a90e\" for &ContainerMetadata{Name:calico-typha,Attempt:0,} returns container id \"8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797\"" Jan 20 23:53:24.443111 containerd[1670]: time="2026-01-20T23:53:24.443084317Z" level=info msg="StartContainer for \"8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797\"" Jan 20 23:53:24.444252 containerd[1670]: time="2026-01-20T23:53:24.444225521Z" level=info msg="connecting to shim 8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797" address="unix:///run/containerd/s/d8a55428b441b614144776d8b3d18ee2224c73e08d69d0105c7d0a9dc23e84df" protocol=ttrpc version=3 Jan 20 23:53:24.462837 systemd[1]: Started 
cri-containerd-8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797.scope - libcontainer container 8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797. Jan 20 23:53:24.473000 audit: BPF prog-id=161 op=LOAD Jan 20 23:53:24.474000 audit: BPF prog-id=162 op=LOAD Jan 20 23:53:24.474000 audit[3552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.474000 audit: BPF prog-id=162 op=UNLOAD Jan 20 23:53:24.474000 audit[3552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.474000 audit: BPF prog-id=163 op=LOAD Jan 20 23:53:24.474000 audit[3552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.474000 audit: BPF prog-id=164 op=LOAD Jan 20 23:53:24.474000 audit[3552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.474000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.475000 audit: BPF prog-id=164 op=UNLOAD Jan 20 23:53:24.475000 audit[3552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.475000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.475000 audit: BPF prog-id=163 op=UNLOAD Jan 20 23:53:24.475000 audit[3552]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.475000 audit: BPF prog-id=165 op=LOAD Jan 20 23:53:24.475000 audit[3552]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3376 pid=3552 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:24.475000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3862626261636263326438356261633134343938346361333363346463 Jan 20 23:53:24.501491 containerd[1670]: time="2026-01-20T23:53:24.501434724Z" level=info msg="StartContainer for \"8bbbacbc2d85bac144984ca33c4dc954e331fc06707bc703675bc88ba1774797\" returns successfully" Jan 20 23:53:25.218764 kubelet[2940]: I0120 23:53:25.218286 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-typha-9dbb5bc59-8vfds" podStartSLOduration=1.546029033 podStartE2EDuration="3.218270294s" podCreationTimestamp="2026-01-20 23:53:22 +0000 UTC" firstStartedPulling="2026-01-20 23:53:22.741334292 +0000 UTC m=+22.715453307" lastFinishedPulling="2026-01-20 23:53:24.413575513 +0000 UTC m=+24.387694568" observedRunningTime="2026-01-20 23:53:25.218017373 +0000 UTC m=+25.192136428" watchObservedRunningTime="2026-01-20 23:53:25.218270294 +0000 UTC m=+25.192389349" Jan 20 23:53:25.277581 kubelet[2940]: E0120 23:53:25.277494 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.277581 kubelet[2940]: W0120 23:53:25.277518 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.277581 kubelet[2940]: E0120 23:53:25.277537 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.278041 kubelet[2940]: E0120 23:53:25.277978 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.278041 kubelet[2940]: W0120 23:53:25.277992 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.278041 kubelet[2940]: E0120 23:53:25.278004 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.278309 kubelet[2940]: E0120 23:53:25.278297 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.278374 kubelet[2940]: W0120 23:53:25.278363 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.278434 kubelet[2940]: E0120 23:53:25.278424 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.278828 kubelet[2940]: E0120 23:53:25.278748 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.278828 kubelet[2940]: W0120 23:53:25.278763 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.278828 kubelet[2940]: E0120 23:53:25.278777 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.279155 kubelet[2940]: E0120 23:53:25.279139 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.279303 kubelet[2940]: W0120 23:53:25.279242 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.279303 kubelet[2940]: E0120 23:53:25.279260 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.279623 kubelet[2940]: E0120 23:53:25.279540 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.279623 kubelet[2940]: W0120 23:53:25.279553 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.279623 kubelet[2940]: E0120 23:53:25.279563 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.280472 kubelet[2940]: E0120 23:53:25.280419 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.280472 kubelet[2940]: W0120 23:53:25.280435 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.280472 kubelet[2940]: E0120 23:53:25.280446 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.281060 kubelet[2940]: E0120 23:53:25.280799 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.281060 kubelet[2940]: W0120 23:53:25.280810 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.281060 kubelet[2940]: E0120 23:53:25.280916 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.281399 kubelet[2940]: E0120 23:53:25.281337 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.281399 kubelet[2940]: W0120 23:53:25.281351 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.281399 kubelet[2940]: E0120 23:53:25.281363 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.282561 kubelet[2940]: E0120 23:53:25.282538 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.282888 kubelet[2940]: W0120 23:53:25.282726 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.282888 kubelet[2940]: E0120 23:53:25.282757 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.283121 kubelet[2940]: E0120 23:53:25.283107 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.283317 kubelet[2940]: W0120 23:53:25.283172 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.283317 kubelet[2940]: E0120 23:53:25.283188 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.283503 kubelet[2940]: E0120 23:53:25.283488 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.283563 kubelet[2940]: W0120 23:53:25.283552 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.283613 kubelet[2940]: E0120 23:53:25.283603 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.283849 kubelet[2940]: E0120 23:53:25.283835 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.283923 kubelet[2940]: W0120 23:53:25.283912 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.283984 kubelet[2940]: E0120 23:53:25.283974 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.284359 kubelet[2940]: E0120 23:53:25.284251 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.284359 kubelet[2940]: W0120 23:53:25.284263 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.284359 kubelet[2940]: E0120 23:53:25.284274 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.284609 kubelet[2940]: E0120 23:53:25.284528 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.284609 kubelet[2940]: W0120 23:53:25.284540 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.284609 kubelet[2940]: E0120 23:53:25.284551 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.285703 kubelet[2940]: E0120 23:53:25.285686 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.285754 kubelet[2940]: W0120 23:53:25.285704 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.285754 kubelet[2940]: E0120 23:53:25.285718 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.285911 kubelet[2940]: E0120 23:53:25.285898 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.285911 kubelet[2940]: W0120 23:53:25.285910 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.285975 kubelet[2940]: E0120 23:53:25.285920 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.286188 kubelet[2940]: E0120 23:53:25.286175 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.286222 kubelet[2940]: W0120 23:53:25.286188 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.286222 kubelet[2940]: E0120 23:53:25.286199 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.286389 kubelet[2940]: E0120 23:53:25.286378 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.286418 kubelet[2940]: W0120 23:53:25.286390 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.286418 kubelet[2940]: E0120 23:53:25.286399 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.286590 kubelet[2940]: E0120 23:53:25.286578 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.286590 kubelet[2940]: W0120 23:53:25.286590 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.286644 kubelet[2940]: E0120 23:53:25.286602 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.286751 kubelet[2940]: E0120 23:53:25.286740 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.286751 kubelet[2940]: W0120 23:53:25.286751 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.286816 kubelet[2940]: E0120 23:53:25.286760 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.286930 kubelet[2940]: E0120 23:53:25.286917 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.286930 kubelet[2940]: W0120 23:53:25.286929 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.286990 kubelet[2940]: E0120 23:53:25.286937 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.287229 kubelet[2940]: E0120 23:53:25.287213 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.287291 kubelet[2940]: W0120 23:53:25.287279 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.287355 kubelet[2940]: E0120 23:53:25.287343 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.287588 kubelet[2940]: E0120 23:53:25.287573 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.287787 kubelet[2940]: W0120 23:53:25.287659 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.287787 kubelet[2940]: E0120 23:53:25.287677 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.287922 kubelet[2940]: E0120 23:53:25.287910 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.287975 kubelet[2940]: W0120 23:53:25.287964 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.288035 kubelet[2940]: E0120 23:53:25.288024 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.288246 kubelet[2940]: E0120 23:53:25.288233 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.288436 kubelet[2940]: W0120 23:53:25.288311 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.288436 kubelet[2940]: E0120 23:53:25.288327 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.288584 kubelet[2940]: E0120 23:53:25.288571 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.288794 kubelet[2940]: W0120 23:53:25.288656 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.288794 kubelet[2940]: E0120 23:53:25.288673 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.288992 kubelet[2940]: E0120 23:53:25.288977 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.289054 kubelet[2940]: W0120 23:53:25.289043 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.289106 kubelet[2940]: E0120 23:53:25.289095 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.289480 kubelet[2940]: E0120 23:53:25.289441 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.289480 kubelet[2940]: W0120 23:53:25.289480 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.289574 kubelet[2940]: E0120 23:53:25.289494 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.289684 kubelet[2940]: E0120 23:53:25.289670 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.289684 kubelet[2940]: W0120 23:53:25.289681 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.289750 kubelet[2940]: E0120 23:53:25.289690 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.289863 kubelet[2940]: E0120 23:53:25.289852 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.289863 kubelet[2940]: W0120 23:53:25.289863 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.289909 kubelet[2940]: E0120 23:53:25.289872 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. 
Error: unexpected end of JSON input" Jan 20 23:53:25.290708 kubelet[2940]: E0120 23:53:25.290687 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.293532 kubelet[2940]: W0120 23:53:25.293497 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.293698 kubelet[2940]: E0120 23:53:25.293631 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.294305 kubelet[2940]: E0120 23:53:25.294279 2940 driver-call.go:262] Failed to unmarshal output for command: init, output: "", error: unexpected end of JSON input Jan 20 23:53:25.294357 kubelet[2940]: W0120 23:53:25.294298 2940 driver-call.go:149] FlexVolume: driver call failed: executable: /opt/libexec/kubernetes/kubelet-plugins/volume/exec/nodeagent~uds/uds, args: [init], error: executable file not found in $PATH, output: "" Jan 20 23:53:25.294357 kubelet[2940]: E0120 23:53:25.294331 2940 plugins.go:703] "Error dynamically probing plugins" err="error creating Flexvolume plugin from directory nodeagent~uds, skipping. Error: unexpected end of JSON input" Jan 20 23:53:25.678358 containerd[1670]: time="2026-01-20T23:53:25.678317449Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:25.680407 containerd[1670]: time="2026-01-20T23:53:25.680265614Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:25.681618 containerd[1670]: time="2026-01-20T23:53:25.681536338Z" level=info msg="ImageCreate event name:\"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:25.686119 containerd[1670]: time="2026-01-20T23:53:25.686071711Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:25.686905 containerd[1670]: time="2026-01-20T23:53:25.686873833Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" with image id \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\", repo tag \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/pod2daemon-flexvol@sha256:50bdfe370b7308fa9957ed1eaccd094aa4f27f9a4f1dfcfef2f8a7696a1551e1\", size \"5636392\" in 1.272741198s" Jan 20 23:53:25.687092 containerd[1670]: time="2026-01-20T23:53:25.686992954Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/pod2daemon-flexvol:v3.30.4\" returns image reference \"sha256:90ff755393144dc5a3c05f95ffe1a3ecd2f89b98ecf36d9e4721471b80af4640\"" Jan 20 23:53:25.691667 containerd[1670]: time="2026-01-20T23:53:25.691634127Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for container &ContainerMetadata{Name:flexvol-driver,Attempt:0,}" Jan 20 23:53:25.709931 containerd[1670]: time="2026-01-20T23:53:25.708798416Z" level=info msg="Container ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff: CDI 
devices from CRI Config.CDIDevices: []" Jan 20 23:53:25.730273 containerd[1670]: time="2026-01-20T23:53:25.730234877Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for &ContainerMetadata{Name:flexvol-driver,Attempt:0,} returns container id \"ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff\"" Jan 20 23:53:25.731190 containerd[1670]: time="2026-01-20T23:53:25.731140840Z" level=info msg="StartContainer for \"ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff\"" Jan 20 23:53:25.734209 containerd[1670]: time="2026-01-20T23:53:25.734118128Z" level=info msg="connecting to shim ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff" address="unix:///run/containerd/s/f694d37f03e5a43773052019e2402347282d310dce1b9158b9ee44e3ecc4bc72" protocol=ttrpc version=3 Jan 20 23:53:25.756653 systemd[1]: Started cri-containerd-ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff.scope - libcontainer container ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff. Jan 20 23:53:25.793000 audit: BPF prog-id=166 op=LOAD Jan 20 23:53:25.794600 kernel: kauditd_printk_skb: 86 callbacks suppressed Jan 20 23:53:25.794667 kernel: audit: type=1334 audit(1768953205.793:561): prog-id=166 op=LOAD Jan 20 23:53:25.793000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.798441 kernel: audit: type=1300 audit(1768953205.793:561): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.798582 kernel: audit: type=1327 audit(1768953205.793:561): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.793000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.793000 audit: BPF prog-id=167 op=LOAD Jan 20 23:53:25.802688 kernel: audit: type=1334 audit(1768953205.793:562): prog-id=167 op=LOAD Jan 20 23:53:25.802739 kernel: audit: type=1300 audit(1768953205.793:562): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.793000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.793000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.809605 kernel: audit: type=1327 audit(1768953205.793:562): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.809664 kernel: audit: type=1334 audit(1768953205.794:563): prog-id=167 op=UNLOAD Jan 20 23:53:25.794000 audit: BPF prog-id=167 op=UNLOAD Jan 20 23:53:25.794000 audit[3627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.813587 kernel: audit: type=1300 audit(1768953205.794:563): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.817194 kernel: audit: type=1327 audit(1768953205.794:563): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.817303 kernel: audit: type=1334 audit(1768953205.794:564): prog-id=166 op=UNLOAD Jan 20 23:53:25.794000 audit: BPF prog-id=166 op=UNLOAD Jan 20 23:53:25.794000 audit[3627]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.794000 audit: BPF prog-id=168 op=LOAD Jan 20 23:53:25.794000 audit[3627]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3474 pid=3627 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:25.794000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6361303265666532643262613964373430313536643166346662313563 Jan 20 23:53:25.836997 
containerd[1670]: time="2026-01-20T23:53:25.836957862Z" level=info msg="StartContainer for \"ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff\" returns successfully" Jan 20 23:53:25.847800 systemd[1]: cri-containerd-ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff.scope: Deactivated successfully. Jan 20 23:53:25.851287 containerd[1670]: time="2026-01-20T23:53:25.851244943Z" level=info msg="received container exit event container_id:\"ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff\" id:\"ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff\" pid:3640 exited_at:{seconds:1768953205 nanos:850880102}" Jan 20 23:53:25.853000 audit: BPF prog-id=168 op=UNLOAD Jan 20 23:53:25.873052 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-ca02efe2d2ba9d740156d1f4fb15c6b5763069b0f2879690311a114e5b777aff-rootfs.mount: Deactivated successfully. Jan 20 23:53:26.125578 kubelet[2940]: E0120 23:53:26.124956 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:26.210213 kubelet[2940]: I0120 23:53:26.210148 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 23:53:26.212211 containerd[1670]: time="2026-01-20T23:53:26.212174495Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\"" Jan 20 23:53:28.128360 kubelet[2940]: E0120 23:53:28.128317 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: cni plugin not initialized" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:28.345700 containerd[1670]: time="2026-01-20T23:53:28.345651995Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:28.346742 containerd[1670]: time="2026-01-20T23:53:28.346502277Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/cni:v3.30.4: active requests=0, bytes read=65921248" Jan 20 23:53:28.348162 containerd[1670]: time="2026-01-20T23:53:28.348123042Z" level=info msg="ImageCreate event name:\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:28.350663 containerd[1670]: time="2026-01-20T23:53:28.350613209Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:28.351318 containerd[1670]: time="2026-01-20T23:53:28.351287931Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/cni:v3.30.4\" with image id \"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\", repo tag \"ghcr.io/flatcar/calico/cni:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/cni@sha256:273501a9cfbd848ade2b6a8452dfafdd3adb4f9bf9aec45c398a5d19b8026627\", size \"67295507\" in 2.139073716s" Jan 20 23:53:28.351418 containerd[1670]: time="2026-01-20T23:53:28.351401811Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/cni:v3.30.4\" returns image reference 
\"sha256:e60d442b6496497355efdf45eaa3ea72f5a2b28a5187aeab33442933f3c735d2\"" Jan 20 23:53:28.357427 containerd[1670]: time="2026-01-20T23:53:28.357232628Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for container &ContainerMetadata{Name:install-cni,Attempt:0,}" Jan 20 23:53:28.369955 containerd[1670]: time="2026-01-20T23:53:28.368810341Z" level=info msg="Container 3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:28.378499 containerd[1670]: time="2026-01-20T23:53:28.378393928Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for &ContainerMetadata{Name:install-cni,Attempt:0,} returns container id \"3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80\"" Jan 20 23:53:28.379707 containerd[1670]: time="2026-01-20T23:53:28.379679972Z" level=info msg="StartContainer for \"3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80\"" Jan 20 23:53:28.381339 containerd[1670]: time="2026-01-20T23:53:28.381288136Z" level=info msg="connecting to shim 3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80" address="unix:///run/containerd/s/f694d37f03e5a43773052019e2402347282d310dce1b9158b9ee44e3ecc4bc72" protocol=ttrpc version=3 Jan 20 23:53:28.404662 systemd[1]: Started cri-containerd-3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80.scope - libcontainer container 3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80. Jan 20 23:53:28.455000 audit: BPF prog-id=169 op=LOAD Jan 20 23:53:28.455000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:28.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365383639326439343332363963333631313538376466613061383839 Jan 20 23:53:28.455000 audit: BPF prog-id=170 op=LOAD Jan 20 23:53:28.455000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:28.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365383639326439343332363963333631313538376466613061383839 Jan 20 23:53:28.455000 audit: BPF prog-id=170 op=UNLOAD Jan 20 23:53:28.455000 audit[3689]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:28.455000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365383639326439343332363963333631313538376466613061383839 Jan 20 23:53:28.455000 audit: BPF prog-id=169 op=UNLOAD Jan 20 23:53:28.455000 audit[3689]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:28.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365383639326439343332363963333631313538376466613061383839 Jan 20 23:53:28.455000 audit: BPF prog-id=171 op=LOAD Jan 20 23:53:28.455000 audit[3689]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3474 pid=3689 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:28.455000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3365383639326439343332363963333631313538376466613061383839 Jan 20 23:53:28.476895 containerd[1670]: time="2026-01-20T23:53:28.476856050Z" level=info msg="StartContainer for \"3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80\" returns successfully" Jan 20 23:53:28.857425 systemd[1]: cri-containerd-3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80.scope: Deactivated successfully. Jan 20 23:53:28.857767 systemd[1]: cri-containerd-3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80.scope: Consumed 448ms CPU time, 190.1M memory peak, 165.9M written to disk. Jan 20 23:53:28.860373 containerd[1670]: time="2026-01-20T23:53:28.860235626Z" level=info msg="received container exit event container_id:\"3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80\" id:\"3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80\" pid:3703 exited_at:{seconds:1768953208 nanos:859540984}" Jan 20 23:53:28.860591 kubelet[2940]: I0120 23:53:28.860448 2940 kubelet_node_status.go:501] "Fast updating node status as it just became ready" Jan 20 23:53:28.862000 audit: BPF prog-id=171 op=UNLOAD Jan 20 23:53:28.896066 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-3e8692d943269c3611587dfa0a889dbe80862e81b2a094af36306bb8f7b63f80-rootfs.mount: Deactivated successfully. Jan 20 23:53:28.914885 systemd[1]: Created slice kubepods-besteffort-poda2e09f63_e3b2_438b_a6ce_d2eefff60f3e.slice - libcontainer container kubepods-besteffort-poda2e09f63_e3b2_438b_a6ce_d2eefff60f3e.slice. Jan 20 23:53:28.921558 systemd[1]: Created slice kubepods-burstable-podaa5babee_7c4f_49df_8be7_6dbca8594e57.slice - libcontainer container kubepods-burstable-podaa5babee_7c4f_49df_8be7_6dbca8594e57.slice. Jan 20 23:53:28.942190 systemd[1]: Created slice kubepods-besteffort-pod0c4c02eb_8f55_425f_9205_33b16803b19e.slice - libcontainer container kubepods-besteffort-pod0c4c02eb_8f55_425f_9205_33b16803b19e.slice. 
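A note on the audit SYSCALL records interleaved throughout this section: arch=c00000b7 identifies AArch64, where syscall 280 is bpf(2) and syscall 57 is close(2). Each "BPF prog-id=N op=LOAD" event pairs with a bpf() call whose exit= value is the new program file descriptor, and the matching "op=UNLOAD" is logged when runc closes that descriptor; note that the a0..a3 arguments are recorded in hex while exit= is decimal. The snippet below is an illustrative Python sketch for reading these fields, not part of kubelet, containerd, or runc.

```python
# Helper for reading the SYSCALL audit records in this log (illustrative sketch).
# On AArch64 (arch=c00000b7), syscall 280 is bpf(2) and syscall 57 is close(2).
AARCH64_SYSCALLS = {57: "close", 280: "bpf"}  # subset of the arm64 (generic) syscall table

def describe(syscall: int, a0_hex: str, exit_val: int) -> str:
    name = AARCH64_SYSCALLS.get(syscall, f"syscall {syscall}")
    return f"{name}(a0=0x{a0_hex} -> {int(a0_hex, 16)}) returned {exit_val}"

# From the records above: a bpf() call with a0=5 (BPF_PROG_LOAD) returning fd 21,
# and the later close(0x15) of that same fd, which is the point at which the
# kernel emits the corresponding "BPF prog-id=... op=UNLOAD" event.
print(describe(280, "5", 21))
print(describe(57, "15", 0))
```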
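The kubepods-*.slice units being created just above (and continuing below) encode the pod QoS class and UID: with the systemd cgroup driver in use here, the kubelet replaces the dashes in the pod UID with underscores and nests the unit under the QoS slice. A small sketch reconstructing one of the names, using the calico-kube-controllers pod UID that appears later in this section:

```python
# Reconstruct a kubelet pod cgroup slice name as formed with the systemd cgroup
# driver (illustrative sketch; the UID below is copied from the
# calico-kube-controllers-596dccccb4-mz7rs RunPodSandbox message further down).
def pod_slice(uid: str, qos: str = "besteffort") -> str:
    return f"kubepods-{qos}-pod{uid.replace('-', '_')}.slice"

print(pod_slice("a2e09f63-e3b2-438b-a6ce-d2eefff60f3e"))
# -> kubepods-besteffort-poda2e09f63_e3b2_438b_a6ce_d2eefff60f3e.slice
```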
Jan 20 23:53:28.966437 systemd[1]: Created slice kubepods-besteffort-podba92b217_c758_44c6_b97e_3beb84feb1eb.slice - libcontainer container kubepods-besteffort-podba92b217_c758_44c6_b97e_3beb84feb1eb.slice. Jan 20 23:53:28.978832 systemd[1]: Created slice kubepods-besteffort-pod7524b8f6_4e20_4bc6_8860_ffd104203deb.slice - libcontainer container kubepods-besteffort-pod7524b8f6_4e20_4bc6_8860_ffd104203deb.slice. Jan 20 23:53:28.984662 systemd[1]: Created slice kubepods-besteffort-pod1331ff2b_5e65_402f_8f27_39c9d0e0fbd5.slice - libcontainer container kubepods-besteffort-pod1331ff2b_5e65_402f_8f27_39c9d0e0fbd5.slice. Jan 20 23:53:28.989373 systemd[1]: Created slice kubepods-burstable-pod9ba157f6_cf6d_4c61_afb6_496bc0f9fb0e.slice - libcontainer container kubepods-burstable-pod9ba157f6_cf6d_4c61_afb6_496bc0f9fb0e.slice. Jan 20 23:53:29.011530 kubelet[2940]: I0120 23:53:29.011419 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-k452c\" (UniqueName: \"kubernetes.io/projected/aa5babee-7c4f-49df-8be7-6dbca8594e57-kube-api-access-k452c\") pod \"coredns-674b8bbfcf-t9hcb\" (UID: \"aa5babee-7c4f-49df-8be7-6dbca8594e57\") " pod="kube-system/coredns-674b8bbfcf-t9hcb" Jan 20 23:53:29.011530 kubelet[2940]: I0120 23:53:29.011486 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7dzwt\" (UniqueName: \"kubernetes.io/projected/a2e09f63-e3b2-438b-a6ce-d2eefff60f3e-kube-api-access-7dzwt\") pod \"calico-kube-controllers-596dccccb4-mz7rs\" (UID: \"a2e09f63-e3b2-438b-a6ce-d2eefff60f3e\") " pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" Jan 20 23:53:29.011827 kubelet[2940]: I0120 23:53:29.011699 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/aa5babee-7c4f-49df-8be7-6dbca8594e57-config-volume\") pod \"coredns-674b8bbfcf-t9hcb\" (UID: \"aa5babee-7c4f-49df-8be7-6dbca8594e57\") " pod="kube-system/coredns-674b8bbfcf-t9hcb" Jan 20 23:53:29.011827 kubelet[2940]: I0120 23:53:29.011730 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tigera-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/a2e09f63-e3b2-438b-a6ce-d2eefff60f3e-tigera-ca-bundle\") pod \"calico-kube-controllers-596dccccb4-mz7rs\" (UID: \"a2e09f63-e3b2-438b-a6ce-d2eefff60f3e\") " pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" Jan 20 23:53:29.113055 kubelet[2940]: I0120 23:53:29.112934 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qpkwz\" (UniqueName: \"kubernetes.io/projected/9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e-kube-api-access-qpkwz\") pod \"coredns-674b8bbfcf-tzdxw\" (UID: \"9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e\") " pod="kube-system/coredns-674b8bbfcf-tzdxw" Jan 20 23:53:29.113055 kubelet[2940]: I0120 23:53:29.113008 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/7524b8f6-4e20-4bc6-8860-ffd104203deb-goldmane-ca-bundle\") pod \"goldmane-666569f655-27t6c\" (UID: \"7524b8f6-4e20-4bc6-8860-ffd104203deb\") " pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.113055 kubelet[2940]: I0120 23:53:29.113031 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e-config-volume\") pod \"coredns-674b8bbfcf-tzdxw\" (UID: \"9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e\") " pod="kube-system/coredns-674b8bbfcf-tzdxw" Jan 20 23:53:29.113055 kubelet[2940]: I0120 23:53:29.113053 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"goldmane-key-pair\" (UniqueName: \"kubernetes.io/secret/7524b8f6-4e20-4bc6-8860-ffd104203deb-goldmane-key-pair\") pod \"goldmane-666569f655-27t6c\" (UID: \"7524b8f6-4e20-4bc6-8860-ffd104203deb\") " pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.113243 kubelet[2940]: I0120 23:53:29.113098 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-backend-key-pair\") pod \"whisker-68795c5fff-tjsh5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " pod="calico-system/whisker-68795c5fff-tjsh5" Jan 20 23:53:29.113243 kubelet[2940]: I0120 23:53:29.113115 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-ca-bundle\") pod \"whisker-68795c5fff-tjsh5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " pod="calico-system/whisker-68795c5fff-tjsh5" Jan 20 23:53:29.113243 kubelet[2940]: I0120 23:53:29.113131 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9wg2h\" (UniqueName: \"kubernetes.io/projected/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-kube-api-access-9wg2h\") pod \"whisker-68795c5fff-tjsh5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " pod="calico-system/whisker-68795c5fff-tjsh5" Jan 20 23:53:29.113243 kubelet[2940]: I0120 23:53:29.113155 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/0c4c02eb-8f55-425f-9205-33b16803b19e-calico-apiserver-certs\") pod \"calico-apiserver-767c66c85d-nfdlb\" (UID: \"0c4c02eb-8f55-425f-9205-33b16803b19e\") " pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" Jan 20 23:53:29.113243 kubelet[2940]: I0120 23:53:29.113175 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n26q6\" (UniqueName: \"kubernetes.io/projected/0c4c02eb-8f55-425f-9205-33b16803b19e-kube-api-access-n26q6\") pod \"calico-apiserver-767c66c85d-nfdlb\" (UID: \"0c4c02eb-8f55-425f-9205-33b16803b19e\") " pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" Jan 20 23:53:29.113359 kubelet[2940]: I0120 23:53:29.113227 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"calico-apiserver-certs\" (UniqueName: \"kubernetes.io/secret/ba92b217-c758-44c6-b97e-3beb84feb1eb-calico-apiserver-certs\") pod \"calico-apiserver-767c66c85d-czqtd\" (UID: \"ba92b217-c758-44c6-b97e-3beb84feb1eb\") " pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" Jan 20 23:53:29.113359 kubelet[2940]: I0120 23:53:29.113244 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ggz5c\" (UniqueName: \"kubernetes.io/projected/ba92b217-c758-44c6-b97e-3beb84feb1eb-kube-api-access-ggz5c\") pod \"calico-apiserver-767c66c85d-czqtd\" (UID: \"ba92b217-c758-44c6-b97e-3beb84feb1eb\") " 
pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" Jan 20 23:53:29.113359 kubelet[2940]: I0120 23:53:29.113262 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7524b8f6-4e20-4bc6-8860-ffd104203deb-config\") pod \"goldmane-666569f655-27t6c\" (UID: \"7524b8f6-4e20-4bc6-8860-ffd104203deb\") " pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.113359 kubelet[2940]: I0120 23:53:29.113277 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m7rtl\" (UniqueName: \"kubernetes.io/projected/7524b8f6-4e20-4bc6-8860-ffd104203deb-kube-api-access-m7rtl\") pod \"goldmane-666569f655-27t6c\" (UID: \"7524b8f6-4e20-4bc6-8860-ffd104203deb\") " pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.230221 containerd[1670]: time="2026-01-20T23:53:29.229483961Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596dccccb4-mz7rs,Uid:a2e09f63-e3b2-438b-a6ce-d2eefff60f3e,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:29.230609 containerd[1670]: time="2026-01-20T23:53:29.230539684Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9hcb,Uid:aa5babee-7c4f-49df-8be7-6dbca8594e57,Namespace:kube-system,Attempt:0,}" Jan 20 23:53:29.234967 containerd[1670]: time="2026-01-20T23:53:29.234868297Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\"" Jan 20 23:53:29.259650 containerd[1670]: time="2026-01-20T23:53:29.259605167Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-nfdlb,Uid:0c4c02eb-8f55-425f-9205-33b16803b19e,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:53:29.274190 containerd[1670]: time="2026-01-20T23:53:29.274156049Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-czqtd,Uid:ba92b217-c758-44c6-b97e-3beb84feb1eb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:53:29.283807 containerd[1670]: time="2026-01-20T23:53:29.283757557Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27t6c,Uid:7524b8f6-4e20-4bc6-8860-ffd104203deb,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:29.288995 containerd[1670]: time="2026-01-20T23:53:29.288947571Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68795c5fff-tjsh5,Uid:1331ff2b-5e65-402f-8f27-39c9d0e0fbd5,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:29.293113 containerd[1670]: time="2026-01-20T23:53:29.293073183Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzdxw,Uid:9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e,Namespace:kube-system,Attempt:0,}" Jan 20 23:53:29.318888 containerd[1670]: time="2026-01-20T23:53:29.318777297Z" level=error msg="Failed to destroy network for sandbox \"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.326195 containerd[1670]: time="2026-01-20T23:53:29.326119198Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9hcb,Uid:aa5babee-7c4f-49df-8be7-6dbca8594e57,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\": plugin type=\"calico\" failed 
(add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.326620 kubelet[2940]: E0120 23:53:29.326372 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.326620 kubelet[2940]: E0120 23:53:29.326438 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9hcb" Jan 20 23:53:29.326620 kubelet[2940]: E0120 23:53:29.326467 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-t9hcb" Jan 20 23:53:29.327201 kubelet[2940]: E0120 23:53:29.327152 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-t9hcb_kube-system(aa5babee-7c4f-49df-8be7-6dbca8594e57)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-t9hcb_kube-system(aa5babee-7c4f-49df-8be7-6dbca8594e57)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2d2bda10bde800cc65cd1845d567038a48f30c94f65e7655b5594721f075bb85\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-t9hcb" podUID="aa5babee-7c4f-49df-8be7-6dbca8594e57" Jan 20 23:53:29.331764 containerd[1670]: time="2026-01-20T23:53:29.331714654Z" level=error msg="Failed to destroy network for sandbox \"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.338800 containerd[1670]: time="2026-01-20T23:53:29.338741274Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596dccccb4-mz7rs,Uid:a2e09f63-e3b2-438b-a6ce-d2eefff60f3e,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.339011 kubelet[2940]: E0120 23:53:29.338972 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox 
\"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.339074 kubelet[2940]: E0120 23:53:29.339032 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" Jan 20 23:53:29.339074 kubelet[2940]: E0120 23:53:29.339053 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" Jan 20 23:53:29.339131 kubelet[2940]: E0120 23:53:29.339102 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1d0a74d3d83d46d4fc68941885c538e943bb208e69735323f5024ab48a1c3951\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:53:29.354716 containerd[1670]: time="2026-01-20T23:53:29.354673839Z" level=error msg="Failed to destroy network for sandbox \"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.360511 containerd[1670]: time="2026-01-20T23:53:29.360444136Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-nfdlb,Uid:0c4c02eb-8f55-425f-9205-33b16803b19e,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.360899 kubelet[2940]: E0120 23:53:29.360867 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 
23:53:29.361028 kubelet[2940]: E0120 23:53:29.361010 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" Jan 20 23:53:29.361098 kubelet[2940]: E0120 23:53:29.361082 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" Jan 20 23:53:29.361207 kubelet[2940]: E0120 23:53:29.361186 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"c64908a5123b8805834b605f93f5bd9dab7d911e11cf7b9d0cc668e247f9c993\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:53:29.367006 containerd[1670]: time="2026-01-20T23:53:29.366726834Z" level=error msg="Failed to destroy network for sandbox \"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.377441 containerd[1670]: time="2026-01-20T23:53:29.377379544Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-czqtd,Uid:ba92b217-c758-44c6-b97e-3beb84feb1eb,Namespace:calico-apiserver,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.377778 kubelet[2940]: E0120 23:53:29.377633 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.377778 kubelet[2940]: E0120 23:53:29.377688 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\": plugin 
type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" Jan 20 23:53:29.377778 kubelet[2940]: E0120 23:53:29.377712 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" Jan 20 23:53:29.379488 kubelet[2940]: E0120 23:53:29.378572 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"1e40e179dcaf32a496f5434fc0775fbf008d9766b6ee413b48c9cc963ea84711\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:53:29.392021 containerd[1670]: time="2026-01-20T23:53:29.391965666Z" level=error msg="Failed to destroy network for sandbox \"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.393727 systemd[1]: run-netns-cni\x2dc82656c8\x2d66e6\x2dc2be\x2d7dab\x2d88ea62f40c11.mount: Deactivated successfully. 
Jan 20 23:53:29.396555 containerd[1670]: time="2026-01-20T23:53:29.396473359Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27t6c,Uid:7524b8f6-4e20-4bc6-8860-ffd104203deb,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.396680 containerd[1670]: time="2026-01-20T23:53:29.396582319Z" level=error msg="Failed to destroy network for sandbox \"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.396804 kubelet[2940]: E0120 23:53:29.396765 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.396911 kubelet[2940]: E0120 23:53:29.396826 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.396949 kubelet[2940]: E0120 23:53:29.396916 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/goldmane-666569f655-27t6c" Jan 20 23:53:29.397031 kubelet[2940]: E0120 23:53:29.397003 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"2c0b094f2718e663fc75d354b426a801fea2c1bdf2fcdacaf943e0ac71a1f5af\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:53:29.398317 systemd[1]: run-netns-cni\x2d204b2260\x2d0ca3\x2dbf84\x2d19a6\x2dd7d25e5f1dd8.mount: Deactivated successfully. 
Jan 20 23:53:29.401384 containerd[1670]: time="2026-01-20T23:53:29.401321373Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-68795c5fff-tjsh5,Uid:1331ff2b-5e65-402f-8f27-39c9d0e0fbd5,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.401622 kubelet[2940]: E0120 23:53:29.401583 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.401694 kubelet[2940]: E0120 23:53:29.401645 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68795c5fff-tjsh5" Jan 20 23:53:29.401694 kubelet[2940]: E0120 23:53:29.401667 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/whisker-68795c5fff-tjsh5" Jan 20 23:53:29.401743 kubelet[2940]: E0120 23:53:29.401725 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"whisker-68795c5fff-tjsh5_calico-system(1331ff2b-5e65-402f-8f27-39c9d0e0fbd5)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"whisker-68795c5fff-tjsh5_calico-system(1331ff2b-5e65-402f-8f27-39c9d0e0fbd5)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"73d36871c4d04e2a8f45a69febd9c6f632e6a413008209c7c2a775c11e432e4c\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/whisker-68795c5fff-tjsh5" podUID="1331ff2b-5e65-402f-8f27-39c9d0e0fbd5" Jan 20 23:53:29.407968 containerd[1670]: time="2026-01-20T23:53:29.407921191Z" level=error msg="Failed to destroy network for sandbox \"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.409715 systemd[1]: run-netns-cni\x2d9dc41bcc\x2dac58\x2d76a2\x2dddca\x2d30cdbd1d8492.mount: Deactivated successfully. 
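Every sandbox failure in this stretch has the same root cause, spelled out in the error text: the Calico CNI plugin stats /var/lib/calico/nodename, a file that the calico/node container writes only once it is running, and at this point that container has not started yet (its image is still being pulled; see 23:53:32 below). A rough sketch of that kind of guard, assuming nothing beyond what the message itself says (the authoritative logic lives in Calico's cni-plugin):

    package main

    import (
    	"fmt"
    	"os"
    	"strings"
    )

    const nodenameFile = "/var/lib/calico/nodename"

    // nodename mimics the check behind the errors above: read the node name that
    // calico/node writes out, and fail with the familiar hint when it is missing.
    func nodename() (string, error) {
    	data, err := os.ReadFile(nodenameFile)
    	if err != nil {
    		if os.IsNotExist(err) {
    			return "", fmt.Errorf("stat %s: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/", nodenameFile)
    		}
    		return "", err
    	}
    	return strings.TrimSpace(string(data)), nil
    }

    func main() {
    	name, err := nodename()
    	if err != nil {
    		// Until calico-node is up, every CNI ADD/DEL for these pods ends here.
    		fmt.Println("CNI would fail:", err)
    		return
    	}
    	fmt.Println("node name:", name)
    }

Until the file appears, every CNI ADD/DEL for these pods fails the same way and kubelet keeps retrying the sandboxes, which is what the repeated RunPodSandbox attempts at 23:53:40 further down are.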
Jan 20 23:53:29.412409 containerd[1670]: time="2026-01-20T23:53:29.412366804Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzdxw,Uid:9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e,Namespace:kube-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.412615 kubelet[2940]: E0120 23:53:29.412579 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:29.412677 kubelet[2940]: E0120 23:53:29.412646 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzdxw" Jan 20 23:53:29.412677 kubelet[2940]: E0120 23:53:29.412667 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="kube-system/coredns-674b8bbfcf-tzdxw" Jan 20 23:53:29.412744 kubelet[2940]: E0120 23:53:29.412712 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"coredns-674b8bbfcf-tzdxw_kube-system(9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"coredns-674b8bbfcf-tzdxw_kube-system(9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"dd7bbbfce632bea64f1ccfaaf9121fb328bffe7a1e45ba9dd0318de4d1a5cfec\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="kube-system/coredns-674b8bbfcf-tzdxw" podUID="9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e" Jan 20 23:53:30.132875 systemd[1]: Created slice kubepods-besteffort-podd9863653_2d98_4479_88c2_8614b7871a32.slice - libcontainer container kubepods-besteffort-podd9863653_2d98_4479_88c2_8614b7871a32.slice. 
Jan 20 23:53:30.134753 containerd[1670]: time="2026-01-20T23:53:30.134711789Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8rjq,Uid:d9863653-2d98-4479-88c2-8614b7871a32,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:30.184291 containerd[1670]: time="2026-01-20T23:53:30.184250651Z" level=error msg="Failed to destroy network for sandbox \"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\"" error="plugin type=\"calico\" failed (delete): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:30.187840 containerd[1670]: time="2026-01-20T23:53:30.187620381Z" level=error msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8rjq,Uid:d9863653-2d98-4479-88c2-8614b7871a32,Namespace:calico-system,Attempt:0,} failed, error" error="rpc error: code = Unknown desc = failed to setup network for sandbox \"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:30.187995 kubelet[2940]: E0120 23:53:30.187824 2940 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" Jan 20 23:53:30.187995 kubelet[2940]: E0120 23:53:30.187886 2940 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:30.187995 kubelet[2940]: E0120 23:53:30.187906 2940 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to setup network for sandbox \"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\": plugin type=\"calico\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/" pod="calico-system/csi-node-driver-f8rjq" Jan 20 23:53:30.188119 kubelet[2940]: E0120 23:53:30.187947 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32)\\\": rpc error: code = Unknown desc = failed to setup network for sandbox \\\"44170ad9d6ba5a661b611d0167f93894d4f11b827965f5f6a3e7d733ca2f65b9\\\": plugin type=\\\"calico\\\" failed (add): stat /var/lib/calico/nodename: no such file or directory: check that the calico/node container is running and has mounted /var/lib/calico/\"" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:30.370623 systemd[1]: run-netns-cni\x2df51c13a5\x2d98d7\x2dfab3\x2da6d5\x2d793e2426231e.mount: Deactivated successfully. 
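The slice created for csi-node-driver-f8rjq shows the naming kubelet's systemd cgroup driver uses for BestEffort pods: the pod UID d9863653-2d98-4479-88c2-8614b7871a32 has its dashes replaced with underscores and is embedded in kubepods-besteffort-pod<uid>.slice. A tiny illustrative helper (hypothetical, not kubelet code) that reproduces the name seen above:

    package main

    import (
    	"fmt"
    	"strings"
    )

    // besteffortSlice builds the systemd slice name for a BestEffort pod, as seen in
    // "Created slice kubepods-besteffort-podd9863653_2d98_4479_88c2_8614b7871a32.slice".
    func besteffortSlice(podUID string) string {
    	return "kubepods-besteffort-pod" + strings.ReplaceAll(podUID, "-", "_") + ".slice"
    }

    func main() {
    	fmt.Println(besteffortSlice("d9863653-2d98-4479-88c2-8614b7871a32"))
    	// kubepods-besteffort-podd9863653_2d98_4479_88c2_8614b7871a32.slice
    }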
Jan 20 23:53:32.777382 systemd[1]: var-lib-containerd-tmpmounts-containerd\x2dmount4292430639.mount: Deactivated successfully. Jan 20 23:53:32.800479 containerd[1670]: time="2026-01-20T23:53:32.800400930Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node:v3.30.4\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:32.802525 containerd[1670]: time="2026-01-20T23:53:32.802465416Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node:v3.30.4: active requests=0, bytes read=150930912" Jan 20 23:53:32.804687 containerd[1670]: time="2026-01-20T23:53:32.804581662Z" level=info msg="ImageCreate event name:\"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:32.806822 containerd[1670]: time="2026-01-20T23:53:32.806784548Z" level=info msg="ImageCreate event name:\"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\" labels:{key:\"io.cri-containerd.image\" value:\"managed\"}" Jan 20 23:53:32.807860 containerd[1670]: time="2026-01-20T23:53:32.807746831Z" level=info msg="Pulled image \"ghcr.io/flatcar/calico/node:v3.30.4\" with image id \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\", repo tag \"ghcr.io/flatcar/calico/node:v3.30.4\", repo digest \"ghcr.io/flatcar/calico/node@sha256:e92cca333202c87d07bf57f38182fd68f0779f912ef55305eda1fccc9f33667c\", size \"150934424\" in 3.572848134s" Jan 20 23:53:32.807860 containerd[1670]: time="2026-01-20T23:53:32.807777751Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node:v3.30.4\" returns image reference \"sha256:43a5290057a103af76996c108856f92ed902f34573d7a864f55f15b8aaf4683b\"" Jan 20 23:53:32.822336 containerd[1670]: time="2026-01-20T23:53:32.822283353Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for container &ContainerMetadata{Name:calico-node,Attempt:0,}" Jan 20 23:53:32.832884 containerd[1670]: time="2026-01-20T23:53:32.832850183Z" level=info msg="Container 02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:32.845245 containerd[1670]: time="2026-01-20T23:53:32.845156538Z" level=info msg="CreateContainer within sandbox \"4027cab4aed8054d20e54623718e42115799d3ecd7d944db47db30c2ded34d1c\" for &ContainerMetadata{Name:calico-node,Attempt:0,} returns container id \"02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830\"" Jan 20 23:53:32.845836 containerd[1670]: time="2026-01-20T23:53:32.845802020Z" level=info msg="StartContainer for \"02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830\"" Jan 20 23:53:32.847533 containerd[1670]: time="2026-01-20T23:53:32.847503225Z" level=info msg="connecting to shim 02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830" address="unix:///run/containerd/s/f694d37f03e5a43773052019e2402347282d310dce1b9158b9ee44e3ecc4bc72" protocol=ttrpc version=3 Jan 20 23:53:32.870783 systemd[1]: Started cri-containerd-02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830.scope - libcontainer container 02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830. 
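The calico/node pull above reports bytes read=150930912 and a pull time of 3.572848134s, which works out to roughly 40 MiB/s from ghcr.io. A quick check of that arithmetic using only the figures copied from the log:

    package main

    import (
    	"fmt"
    	"time"
    )

    func main() {
    	// Figures taken verbatim from the containerd entries above.
    	const bytesRead = 150930912
    	elapsed, _ := time.ParseDuration("3.572848134s")

    	mib := float64(bytesRead) / (1 << 20)
    	fmt.Printf("pulled %.1f MiB in %s => %.1f MiB/s\n", mib, elapsed, mib/elapsed.Seconds())
    }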
Jan 20 23:53:32.935000 audit: BPF prog-id=172 op=LOAD Jan 20 23:53:32.937916 kernel: kauditd_printk_skb: 22 callbacks suppressed Jan 20 23:53:32.937975 kernel: audit: type=1334 audit(1768953212.935:573): prog-id=172 op=LOAD Jan 20 23:53:32.935000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.941192 kernel: audit: type=1300 audit(1768953212.935:573): arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.941231 kernel: audit: type=1327 audit(1768953212.935:573): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.935000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.936000 audit: BPF prog-id=173 op=LOAD Jan 20 23:53:32.944997 kernel: audit: type=1334 audit(1768953212.936:574): prog-id=173 op=LOAD Jan 20 23:53:32.945038 kernel: audit: type=1300 audit(1768953212.936:574): arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.936000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.948122 kernel: audit: type=1327 audit(1768953212.936:574): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.936000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.937000 audit: BPF prog-id=173 op=UNLOAD Jan 20 23:53:32.952527 kernel: audit: type=1334 audit(1768953212.937:575): prog-id=173 op=UNLOAD Jan 20 23:53:32.952607 kernel: audit: type=1300 audit(1768953212.937:575): arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.937000 
audit[4009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.958885 kernel: audit: type=1327 audit(1768953212.937:575): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.959272 kernel: audit: type=1334 audit(1768953212.937:576): prog-id=172 op=UNLOAD Jan 20 23:53:32.937000 audit: BPF prog-id=172 op=UNLOAD Jan 20 23:53:32.937000 audit[4009]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.937000 audit: BPF prog-id=174 op=LOAD Jan 20 23:53:32.937000 audit[4009]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=3474 pid=4009 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:32.937000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3032303839623163393963363634363035653430366262373361326263 Jan 20 23:53:32.974844 containerd[1670]: time="2026-01-20T23:53:32.974794269Z" level=info msg="StartContainer for \"02089b1c99c664605e406bb73a2bc67752e38158865202fd8814ca9bb167c830\" returns successfully" Jan 20 23:53:33.108136 kernel: wireguard: WireGuard 1.0.0 loaded. See www.wireguard.com for information. Jan 20 23:53:33.108259 kernel: wireguard: Copyright (C) 2015-2019 Jason A. Donenfeld . All Rights Reserved. 
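The PROCTITLE fields in these audit records are the process command line, hex-encoded with NUL bytes separating the arguments; decoding the value logged for pid 4009 recovers the runc invocation that started the calico-node task. A small decoder for that encoding (the constant below is only a prefix of the logged value):

    package main

    import (
    	"encoding/hex"
    	"fmt"
    	"strings"
    )

    // decodeProctitle turns an audit PROCTITLE hex blob back into the command line,
    // replacing the NUL separators between argv entries with spaces.
    func decodeProctitle(h string) (string, error) {
    	raw, err := hex.DecodeString(h)
    	if err != nil {
    		return "", err
    	}
    	return strings.ReplaceAll(string(raw), "\x00", " "), nil
    }

    func main() {
    	// Prefix of the proctitle logged for pid 4009 above (truncated here).
    	const p = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
    	cmd, err := decodeProctitle(p)
    	if err != nil {
    		panic(err)
    	}
    	fmt.Println(cmd) // runc --root /run/containerd/runc/k8s.io
    }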
Jan 20 23:53:33.271143 kubelet[2940]: I0120 23:53:33.270864 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="calico-system/calico-node-g5whf" podStartSLOduration=1.424757286 podStartE2EDuration="11.270845235s" podCreationTimestamp="2026-01-20 23:53:22 +0000 UTC" firstStartedPulling="2026-01-20 23:53:22.962282884 +0000 UTC m=+22.936401939" lastFinishedPulling="2026-01-20 23:53:32.808370833 +0000 UTC m=+32.782489888" observedRunningTime="2026-01-20 23:53:33.270542434 +0000 UTC m=+33.244661489" watchObservedRunningTime="2026-01-20 23:53:33.270845235 +0000 UTC m=+33.244964290" Jan 20 23:53:33.339853 kubelet[2940]: I0120 23:53:33.339727 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9wg2h\" (UniqueName: \"kubernetes.io/projected/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-kube-api-access-9wg2h\") pod \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " Jan 20 23:53:33.339853 kubelet[2940]: I0120 23:53:33.339771 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-backend-key-pair\") pod \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " Jan 20 23:53:33.339853 kubelet[2940]: I0120 23:53:33.339805 2940 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-ca-bundle\") pod \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\" (UID: \"1331ff2b-5e65-402f-8f27-39c9d0e0fbd5\") " Jan 20 23:53:33.340569 kubelet[2940]: I0120 23:53:33.340146 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-ca-bundle" (OuterVolumeSpecName: "whisker-ca-bundle") pod "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5" (UID: "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5"). InnerVolumeSpecName "whisker-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 20 23:53:33.344004 kubelet[2940]: I0120 23:53:33.343922 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-backend-key-pair" (OuterVolumeSpecName: "whisker-backend-key-pair") pod "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5" (UID: "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5"). InnerVolumeSpecName "whisker-backend-key-pair". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 20 23:53:33.344133 kubelet[2940]: I0120 23:53:33.344069 2940 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-kube-api-access-9wg2h" (OuterVolumeSpecName: "kube-api-access-9wg2h") pod "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5" (UID: "1331ff2b-5e65-402f-8f27-39c9d0e0fbd5"). InnerVolumeSpecName "kube-api-access-9wg2h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 20 23:53:33.441444 kubelet[2940]: I0120 23:53:33.441369 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-ca-bundle\") on node \"ci-4547-0-0-n-e5b472a427\" DevicePath \"\"" Jan 20 23:53:33.441444 kubelet[2940]: I0120 23:53:33.441420 2940 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9wg2h\" (UniqueName: \"kubernetes.io/projected/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-kube-api-access-9wg2h\") on node \"ci-4547-0-0-n-e5b472a427\" DevicePath \"\"" Jan 20 23:53:33.441444 kubelet[2940]: I0120 23:53:33.441433 2940 reconciler_common.go:299] "Volume detached for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5-whisker-backend-key-pair\") on node \"ci-4547-0-0-n-e5b472a427\" DevicePath \"\"" Jan 20 23:53:33.554115 systemd[1]: Removed slice kubepods-besteffort-pod1331ff2b_5e65_402f_8f27_39c9d0e0fbd5.slice - libcontainer container kubepods-besteffort-pod1331ff2b_5e65_402f_8f27_39c9d0e0fbd5.slice. Jan 20 23:53:33.611222 systemd[1]: Created slice kubepods-besteffort-pod69215543_8df6_43c3_9b3c_95e532549500.slice - libcontainer container kubepods-besteffort-pod69215543_8df6_43c3_9b3c_95e532549500.slice. Jan 20 23:53:33.642783 kubelet[2940]: I0120 23:53:33.642722 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/69215543-8df6-43c3-9b3c-95e532549500-whisker-ca-bundle\") pod \"whisker-777bdc9c5f-sfvx4\" (UID: \"69215543-8df6-43c3-9b3c-95e532549500\") " pod="calico-system/whisker-777bdc9c5f-sfvx4" Jan 20 23:53:33.642783 kubelet[2940]: I0120 23:53:33.642776 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whisker-backend-key-pair\" (UniqueName: \"kubernetes.io/secret/69215543-8df6-43c3-9b3c-95e532549500-whisker-backend-key-pair\") pod \"whisker-777bdc9c5f-sfvx4\" (UID: \"69215543-8df6-43c3-9b3c-95e532549500\") " pod="calico-system/whisker-777bdc9c5f-sfvx4" Jan 20 23:53:33.642989 kubelet[2940]: I0120 23:53:33.642806 2940 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tgf5q\" (UniqueName: \"kubernetes.io/projected/69215543-8df6-43c3-9b3c-95e532549500-kube-api-access-tgf5q\") pod \"whisker-777bdc9c5f-sfvx4\" (UID: \"69215543-8df6-43c3-9b3c-95e532549500\") " pod="calico-system/whisker-777bdc9c5f-sfvx4" Jan 20 23:53:33.780094 systemd[1]: var-lib-kubelet-pods-1331ff2b\x2d5e65\x2d402f\x2d8f27\x2d39c9d0e0fbd5-volumes-kubernetes.io\x7eprojected-kube\x2dapi\x2daccess\x2d9wg2h.mount: Deactivated successfully. Jan 20 23:53:33.780189 systemd[1]: var-lib-kubelet-pods-1331ff2b\x2d5e65\x2d402f\x2d8f27\x2d39c9d0e0fbd5-volumes-kubernetes.io\x7esecret-whisker\x2dbackend\x2dkey\x2dpair.mount: Deactivated successfully. 
Jan 20 23:53:33.914782 containerd[1670]: time="2026-01-20T23:53:33.914661716Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-777bdc9c5f-sfvx4,Uid:69215543-8df6-43c3-9b3c-95e532549500,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:34.058559 systemd-networkd[1586]: cali146c8ebe5aa: Link UP Jan 20 23:53:34.058971 systemd-networkd[1586]: cali146c8ebe5aa: Gained carrier Jan 20 23:53:34.082469 containerd[1670]: 2026-01-20 23:53:33.944 [INFO][4098] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:34.082469 containerd[1670]: 2026-01-20 23:53:33.963 [INFO][4098] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0 whisker-777bdc9c5f- calico-system 69215543-8df6-43c3-9b3c-95e532549500 907 0 2026-01-20 23:53:33 +0000 UTC map[app.kubernetes.io/name:whisker k8s-app:whisker pod-template-hash:777bdc9c5f projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:whisker] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 whisker-777bdc9c5f-sfvx4 eth0 whisker [] [] [kns.calico-system ksa.calico-system.whisker] cali146c8ebe5aa [] [] }} ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-" Jan 20 23:53:34.082469 containerd[1670]: 2026-01-20 23:53:33.963 [INFO][4098] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.082469 containerd[1670]: 2026-01-20 23:53:34.008 [INFO][4112] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" HandleID="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Workload="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.008 [INFO][4112] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" HandleID="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Workload="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004d9e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"whisker-777bdc9c5f-sfvx4", "timestamp":"2026-01-20 23:53:34.008171223 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.008 [INFO][4112] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.008 [INFO][4112] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.008 [INFO][4112] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.018 [INFO][4112] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.024 [INFO][4112] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.029 [INFO][4112] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.031 [INFO][4112] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082714 containerd[1670]: 2026-01-20 23:53:34.033 [INFO][4112] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.033 [INFO][4112] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.034 [INFO][4112] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1 Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.042 [INFO][4112] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.048 [INFO][4112] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.193/26] block=192.168.16.192/26 handle="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.048 [INFO][4112] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.193/26] handle="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.048 [INFO][4112] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
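The IPAM trace above confirms the node's affinity for block 192.168.16.192/26 and then claims 192.168.16.193/26 for the whisker pod, i.e. the first address after the block's own network address. A quick sanity check of that allocation with the standard library:

    package main

    import (
    	"fmt"
    	"net/netip"
    )

    func main() {
    	// Block and address reported by ipam/ipam.go in the trace above.
    	block := netip.MustParsePrefix("192.168.16.192/26")
    	assigned := netip.MustParseAddr("192.168.16.193")

    	fmt.Println("block contains assigned IP:", block.Contains(assigned)) // true
    	fmt.Println("first address in block:    ", block.Masked().Addr())    // 192.168.16.192
    	fmt.Println("addresses in a /26:        ", 1<<(32-block.Bits()))     // 64
    }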
Jan 20 23:53:34.082906 containerd[1670]: 2026-01-20 23:53:34.048 [INFO][4112] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.193/26] IPv6=[] ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" HandleID="k8s-pod-network.2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Workload="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.083038 containerd[1670]: 2026-01-20 23:53:34.051 [INFO][4098] cni-plugin/k8s.go 418: Populated endpoint ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0", GenerateName:"whisker-777bdc9c5f-", Namespace:"calico-system", SelfLink:"", UID:"69215543-8df6-43c3-9b3c-95e532549500", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"777bdc9c5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"whisker-777bdc9c5f-sfvx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali146c8ebe5aa", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:34.083038 containerd[1670]: 2026-01-20 23:53:34.051 [INFO][4098] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.193/32] ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.083108 containerd[1670]: 2026-01-20 23:53:34.051 [INFO][4098] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali146c8ebe5aa ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.083108 containerd[1670]: 2026-01-20 23:53:34.059 [INFO][4098] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.083148 containerd[1670]: 2026-01-20 23:53:34.060 [INFO][4098] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" 
Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0", GenerateName:"whisker-777bdc9c5f-", Namespace:"calico-system", SelfLink:"", UID:"69215543-8df6-43c3-9b3c-95e532549500", ResourceVersion:"907", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 33, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"whisker", "k8s-app":"whisker", "pod-template-hash":"777bdc9c5f", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"whisker"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1", Pod:"whisker-777bdc9c5f-sfvx4", Endpoint:"eth0", ServiceAccountName:"whisker", IPNetworks:[]string{"192.168.16.193/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.whisker"}, InterfaceName:"cali146c8ebe5aa", MAC:"ba:0e:2e:68:04:2d", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:34.083194 containerd[1670]: 2026-01-20 23:53:34.077 [INFO][4098] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" Namespace="calico-system" Pod="whisker-777bdc9c5f-sfvx4" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-whisker--777bdc9c5f--sfvx4-eth0" Jan 20 23:53:34.130385 containerd[1670]: time="2026-01-20T23:53:34.130298812Z" level=info msg="connecting to shim 2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1" address="unix:///run/containerd/s/79c02fa4c8d5fdacc74faa29fe5ea9297addb1d85147bb2bf294cf2b14b597a0" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:34.131357 kubelet[2940]: I0120 23:53:34.131193 2940 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="1331ff2b-5e65-402f-8f27-39c9d0e0fbd5" path="/var/lib/kubelet/pods/1331ff2b-5e65-402f-8f27-39c9d0e0fbd5/volumes" Jan 20 23:53:34.158931 systemd[1]: Started cri-containerd-2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1.scope - libcontainer container 2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1. 
Jan 20 23:53:34.171000 audit: BPF prog-id=175 op=LOAD Jan 20 23:53:34.171000 audit: BPF prog-id=176 op=LOAD Jan 20 23:53:34.171000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.171000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=176 op=UNLOAD Jan 20 23:53:34.172000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=177 op=LOAD Jan 20 23:53:34.172000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=178 op=LOAD Jan 20 23:53:34.172000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=22 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=178 op=UNLOAD Jan 20 23:53:34.172000 audit[4147]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=16 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=177 op=UNLOAD Jan 20 23:53:34.172000 audit[4147]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=14 a1=0 a2=0 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.172000 audit: BPF prog-id=179 op=LOAD Jan 20 23:53:34.172000 audit[4147]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=20 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4136 pid=4147 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:34.172000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3261306261613936636466636630613661643338396139613362643630 Jan 20 23:53:34.197326 containerd[1670]: time="2026-01-20T23:53:34.197266084Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:whisker-777bdc9c5f-sfvx4,Uid:69215543-8df6-43c3-9b3c-95e532549500,Namespace:calico-system,Attempt:0,} returns sandbox id \"2a0baa96cdfcf0a6ad389a9a3bd60fb609bc25c86b6ee9797e2ec7ba4f342ac1\"" Jan 20 23:53:34.199032 containerd[1670]: time="2026-01-20T23:53:34.198978089Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:53:34.521256 containerd[1670]: time="2026-01-20T23:53:34.521198730Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:34.522786 containerd[1670]: time="2026-01-20T23:53:34.522734374Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:53:34.522903 containerd[1670]: time="2026-01-20T23:53:34.522834214Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:34.523472 kubelet[2940]: E0120 23:53:34.523104 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:53:34.524073 kubelet[2940]: E0120 23:53:34.523821 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:53:34.524128 kubelet[2940]: E0120 23:53:34.524005 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:149aa4975ca74b8e859dd8df0df1ec0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:34.526884 containerd[1670]: time="2026-01-20T23:53:34.526789626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:53:34.865391 containerd[1670]: time="2026-01-20T23:53:34.865165873Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:34.867435 containerd[1670]: time="2026-01-20T23:53:34.867365359Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:53:34.867817 containerd[1670]: time="2026-01-20T23:53:34.867427040Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:34.867854 kubelet[2940]: E0120 23:53:34.867767 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:53:34.867854 kubelet[2940]: E0120 23:53:34.867818 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:53:34.867986 kubelet[2940]: E0120 23:53:34.867938 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:34.869483 kubelet[2940]: E0120 23:53:34.869155 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:53:35.256608 kubelet[2940]: E0120 23:53:35.256560 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = 
NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:53:35.273000 audit[4300]: NETFILTER_CFG table=filter:119 family=2 entries=22 op=nft_register_rule pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:35.273000 audit[4300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffcad98620 a2=0 a3=1 items=0 ppid=3104 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:35.273000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:35.279000 audit[4300]: NETFILTER_CFG table=nat:120 family=2 entries=12 op=nft_register_rule pid=4300 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:35.279000 audit[4300]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffcad98620 a2=0 a3=1 items=0 ppid=3104 pid=4300 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:35.279000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:35.766863 systemd-networkd[1586]: cali146c8ebe5aa: Gained IPv6LL Jan 20 23:53:40.125956 containerd[1670]: time="2026-01-20T23:53:40.125869513Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzdxw,Uid:9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e,Namespace:kube-system,Attempt:0,}" Jan 20 23:53:40.126362 containerd[1670]: time="2026-01-20T23:53:40.126214394Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-nfdlb,Uid:0c4c02eb-8f55-425f-9205-33b16803b19e,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:53:40.126362 containerd[1670]: time="2026-01-20T23:53:40.126307074Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596dccccb4-mz7rs,Uid:a2e09f63-e3b2-438b-a6ce-d2eefff60f3e,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:40.126443 containerd[1670]: time="2026-01-20T23:53:40.126387154Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9hcb,Uid:aa5babee-7c4f-49df-8be7-6dbca8594e57,Namespace:kube-system,Attempt:0,}" Jan 20 23:53:40.279432 systemd-networkd[1586]: cali078793afb89: Link UP Jan 20 23:53:40.280073 systemd-networkd[1586]: cali078793afb89: Gained carrier Jan 20 23:53:40.294795 containerd[1670]: 2026-01-20 23:53:40.185 [INFO][4446] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:40.294795 containerd[1670]: 2026-01-20 23:53:40.201 [INFO][4446] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0 calico-apiserver-767c66c85d- calico-apiserver 0c4c02eb-8f55-425f-9205-33b16803b19e 843 0 2026-01-20 23:53:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:767c66c85d projectcalico.org/namespace:calico-apiserver 
projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 calico-apiserver-767c66c85d-nfdlb eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali078793afb89 [] [] }} ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-" Jan 20 23:53:40.294795 containerd[1670]: 2026-01-20 23:53:40.201 [INFO][4446] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.294795 containerd[1670]: 2026-01-20 23:53:40.232 [INFO][4495] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" HandleID="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.232 [INFO][4495] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" HandleID="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000515280), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-e5b472a427", "pod":"calico-apiserver-767c66c85d-nfdlb", "timestamp":"2026-01-20 23:53:40.232399697 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.232 [INFO][4495] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.232 [INFO][4495] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
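The 404 that containerd logs above for ghcr.io/flatcar/calico/whisker:v3.30.4 (and for whisker-backend) can be checked outside the kubelet by asking the registry's OCI distribution API for the manifest directly. The sketch below is a minimal, hypothetical check; the ghcr.io anonymous-token flow and the Accept header are assumptions about the registry, not details taken from this log.

```python
# Hedged sketch: reproduce the "404 Not Found" containerd reported for the
# whisker image. The token endpoint and media type are assumptions about
# ghcr.io's anonymous-pull flow, not something shown in the log above.
import requests

REPO = "flatcar/calico/whisker"
TAG = "v3.30.4"

# Request an anonymous pull token for the repository.
token = requests.get(
    "https://ghcr.io/token",
    params={"scope": f"repository:{REPO}:pull"},
    timeout=10,
).json()["token"]

# HEAD the manifest; a 404 here corresponds to containerd's
# "failed to resolve image ... not found" error above.
resp = requests.head(
    f"https://ghcr.io/v2/{REPO}/manifests/{TAG}",
    headers={
        "Authorization": f"Bearer {token}",
        "Accept": "application/vnd.oci.image.index.v1+json",
    },
    timeout=10,
)
print(resp.status_code)  # 404 would mean the tag simply is not in the registry
```

A 404 from a check like this points at the image reference itself (missing tag or repository) rather than at node-side networking, which is consistent with the ErrImagePull / ImagePullBackOff cycle the kubelet records above.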
Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.232 [INFO][4495] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.244 [INFO][4495] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.251 [INFO][4495] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.257 [INFO][4495] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.259 [INFO][4495] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295077 containerd[1670]: 2026-01-20 23:53:40.261 [INFO][4495] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.261 [INFO][4495] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.263 [INFO][4495] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.268 [INFO][4495] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.275 [INFO][4495] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.194/26] block=192.168.16.192/26 handle="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.275 [INFO][4495] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.194/26] handle="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.275 [INFO][4495] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
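The IPAM entries above claim 192.168.16.194 out of the host-affine block 192.168.16.192/26. The address arithmetic behind that block is easy to sanity-check with the standard library; this is plain address math, not Calico's IPAM code.

```python
# Address math behind the IPAM log entries above: a /26 block holds 64
# addresses, and the claimed pod IP falls inside the host's affine block.
import ipaddress

block = ipaddress.ip_network("192.168.16.192/26")
pod_ip = ipaddress.ip_address("192.168.16.194")

print(block.num_addresses)   # 64 addresses per block
print(block[0], block[-1])   # 192.168.16.192 .. 192.168.16.255
print(pod_ip in block)       # True: the assigned IP belongs to this node's block
```

Subsequent pod sandboxes on this node draw addresses from the same block, which is why each request serializes on the host-wide IPAM lock acquired and released in the entries above.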
Jan 20 23:53:40.295343 containerd[1670]: 2026-01-20 23:53:40.275 [INFO][4495] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.194/26] IPv6=[] ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" HandleID="k8s-pod-network.59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.295531 containerd[1670]: 2026-01-20 23:53:40.277 [INFO][4446] cni-plugin/k8s.go 418: Populated endpoint ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0", GenerateName:"calico-apiserver-767c66c85d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c4c02eb-8f55-425f-9205-33b16803b19e", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"767c66c85d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"calico-apiserver-767c66c85d-nfdlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali078793afb89", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.295608 containerd[1670]: 2026-01-20 23:53:40.278 [INFO][4446] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.194/32] ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.295608 containerd[1670]: 2026-01-20 23:53:40.278 [INFO][4446] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali078793afb89 ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.295608 containerd[1670]: 2026-01-20 23:53:40.280 [INFO][4446] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.295697 containerd[1670]: 2026-01-20 
23:53:40.280 [INFO][4446] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0", GenerateName:"calico-apiserver-767c66c85d-", Namespace:"calico-apiserver", SelfLink:"", UID:"0c4c02eb-8f55-425f-9205-33b16803b19e", ResourceVersion:"843", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"767c66c85d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a", Pod:"calico-apiserver-767c66c85d-nfdlb", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.194/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali078793afb89", MAC:"8a:cc:10:c6:75:2e", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.295803 containerd[1670]: 2026-01-20 23:53:40.293 [INFO][4446] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-nfdlb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--nfdlb-eth0" Jan 20 23:53:40.313967 containerd[1670]: time="2026-01-20T23:53:40.313907130Z" level=info msg="connecting to shim 59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a" address="unix:///run/containerd/s/9c233b3f286bdf7a5e7f77fe4640d094cd74efaf6faa2da87aae55fdc8fa340c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:40.336663 systemd[1]: Started cri-containerd-59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a.scope - libcontainer container 59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a. 
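The audit PROCTITLE values in this log (the runc records above and the iptables-restore one further up) are the process command line, hex-encoded with NUL bytes between arguments. Decoding one is a one-liner; the sample below is the leading part of the runc PROCTITLE shown above (auditd truncates the full value).

```python
# Decode an audit PROCTITLE field: hex-encoded argv with NUL separators.
# The sample is the leading portion of the runc PROCTITLE records above.
proctitle = "72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F"
args = bytes.fromhex(proctitle).split(b"\x00")
print([a.decode() for a in args])
# ['runc', '--root', '/run/containerd/runc/k8s.io']
```

The same decode applied to the iptables-restore record earlier yields "iptables-restore -w 5 -W 100000 --noflush --counters", the command behind the NETFILTER_CFG entries.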
Jan 20 23:53:40.347000 audit: BPF prog-id=180 op=LOAD Jan 20 23:53:40.349041 kernel: kauditd_printk_skb: 33 callbacks suppressed Jan 20 23:53:40.349089 kernel: audit: type=1334 audit(1768953220.347:588): prog-id=180 op=LOAD Jan 20 23:53:40.348000 audit: BPF prog-id=181 op=LOAD Jan 20 23:53:40.348000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.353595 kernel: audit: type=1334 audit(1768953220.348:589): prog-id=181 op=LOAD Jan 20 23:53:40.353723 kernel: audit: type=1300 audit(1768953220.348:589): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.353788 kernel: audit: type=1327 audit(1768953220.348:589): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.348000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.349000 audit: BPF prog-id=181 op=UNLOAD Jan 20 23:53:40.357581 kernel: audit: type=1334 audit(1768953220.349:590): prog-id=181 op=UNLOAD Jan 20 23:53:40.357618 kernel: audit: type=1300 audit(1768953220.349:590): arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.349000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.363948 kernel: audit: type=1327 audit(1768953220.349:590): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.349000 audit: BPF prog-id=182 op=LOAD Jan 20 23:53:40.365044 kernel: audit: type=1334 audit(1768953220.349:591): prog-id=182 op=LOAD Jan 20 23:53:40.349000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 
comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.368327 kernel: audit: type=1300 audit(1768953220.349:591): arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.368522 kernel: audit: type=1327 audit(1768953220.349:591): proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.349000 audit: BPF prog-id=183 op=LOAD Jan 20 23:53:40.349000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.349000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.353000 audit: BPF prog-id=183 op=UNLOAD Jan 20 23:53:40.353000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.353000 audit: BPF prog-id=182 op=UNLOAD Jan 20 23:53:40.353000 audit[4549]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.353000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.353000 audit: BPF prog-id=184 op=LOAD Jan 20 23:53:40.353000 audit[4549]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4538 pid=4549 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.353000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3539646263623064656162366337313765613266633235386561653536 Jan 20 23:53:40.389631 containerd[1670]: time="2026-01-20T23:53:40.389525627Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-nfdlb,Uid:0c4c02eb-8f55-425f-9205-33b16803b19e,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"59dbcb0deab6c717ea2fc258eae5694a5628624f621b3ee674bd5f5020b2671a\"" Jan 20 23:53:40.392466 containerd[1670]: time="2026-01-20T23:53:40.392410955Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:53:40.397523 systemd-networkd[1586]: cali990ded0a686: Link UP Jan 20 23:53:40.397869 systemd-networkd[1586]: cali990ded0a686: Gained carrier Jan 20 23:53:40.412584 containerd[1670]: 2026-01-20 23:53:40.188 [INFO][4469] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:40.412584 containerd[1670]: 2026-01-20 23:53:40.205 [INFO][4469] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0 coredns-674b8bbfcf- kube-system aa5babee-7c4f-49df-8be7-6dbca8594e57 842 0 2026-01-20 23:53:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 coredns-674b8bbfcf-t9hcb eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali990ded0a686 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-" Jan 20 23:53:40.412584 containerd[1670]: 2026-01-20 23:53:40.205 [INFO][4469] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.412584 containerd[1670]: 2026-01-20 23:53:40.233 [INFO][4497] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" HandleID="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.233 [INFO][4497] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" HandleID="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002d4fd0), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"coredns-674b8bbfcf-t9hcb", "timestamp":"2026-01-20 23:53:40.233123859 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), 
HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.233 [INFO][4497] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.276 [INFO][4497] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.276 [INFO][4497] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.345 [INFO][4497] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.357 [INFO][4497] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.369 [INFO][4497] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.373 [INFO][4497] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.412804 containerd[1670]: 2026-01-20 23:53:40.378 [INFO][4497] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.378 [INFO][4497] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.380 [INFO][4497] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3 Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.385 [INFO][4497] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4497] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.195/26] block=192.168.16.192/26 handle="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4497] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.195/26] handle="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4497] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
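The SYSCALL records threaded through this section are flat key=value pairs: arch=c00000b7 is the aarch64 audit architecture, and on this machine syscall 280 is bpf and 57 is close, which lines up with the BPF prog-id LOAD/UNLOAD lines around them. A throwaway parser is enough to pull fields out of a captured log; the helper below is an ad hoc sketch, not part of any audit tooling shown here.

```python
# Ad hoc parser for the audit SYSCALL records above. Each record is a run of
# key=value tokens; a plain whitespace split is good enough for the fields in
# this log (the quoted values here never contain spaces).
def parse_audit(record: str) -> dict:
    fields = {}
    for token in record.split():
        key, sep, value = token.partition("=")
        if sep:
            fields[key] = value.strip('"')
    return fields

sample = ('arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a2=98 a3=0 '
          'ppid=4538 pid=4549 uid=0 comm="runc" exe="/usr/bin/runc"')
rec = parse_audit(sample)
print(rec["syscall"], rec["comm"], rec["exe"])  # 280 runc /usr/bin/runc
```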
Jan 20 23:53:40.413005 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4497] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.195/26] IPv6=[] ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" HandleID="k8s-pod-network.c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.394 [INFO][4469] cni-plugin/k8s.go 418: Populated endpoint ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"aa5babee-7c4f-49df-8be7-6dbca8594e57", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"coredns-674b8bbfcf-t9hcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990ded0a686", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.395 [INFO][4469] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.195/32] ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.395 [INFO][4469] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali990ded0a686 ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.397 [INFO][4469] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" 
Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.398 [INFO][4469] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"aa5babee-7c4f-49df-8be7-6dbca8594e57", ResourceVersion:"842", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3", Pod:"coredns-674b8bbfcf-t9hcb", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.195/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali990ded0a686", MAC:"7a:19:75:b5:e7:c0", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.413133 containerd[1670]: 2026-01-20 23:53:40.410 [INFO][4469] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" Namespace="kube-system" Pod="coredns-674b8bbfcf-t9hcb" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--t9hcb-eth0" Jan 20 23:53:40.434032 containerd[1670]: time="2026-01-20T23:53:40.433967874Z" level=info msg="connecting to shim c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3" address="unix:///run/containerd/s/9af5422c963223e4e849d5f691da450382d8f64c2375fc8505ab288177a831be" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:40.460895 systemd[1]: Started cri-containerd-c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3.scope - libcontainer container c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3. 
Jan 20 23:53:40.471000 audit: BPF prog-id=185 op=LOAD Jan 20 23:53:40.472000 audit: BPF prog-id=186 op=LOAD Jan 20 23:53:40.472000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130180 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.472000 audit: BPF prog-id=186 op=UNLOAD Jan 20 23:53:40.472000 audit[4600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.472000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.473000 audit: BPF prog-id=187 op=LOAD Jan 20 23:53:40.473000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001303e8 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.473000 audit: BPF prog-id=188 op=LOAD Jan 20 23:53:40.473000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000130168 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.473000 audit: BPF prog-id=188 op=UNLOAD Jan 20 23:53:40.473000 audit[4600]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.473000 audit: BPF prog-id=187 op=UNLOAD Jan 20 23:53:40.473000 audit[4600]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.473000 audit: BPF prog-id=189 op=LOAD Jan 20 23:53:40.473000 audit[4600]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000130648 a2=98 a3=0 items=0 ppid=4587 pid=4600 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.473000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6333346465376666666631326234643235623831343738663539326431 Jan 20 23:53:40.493636 systemd-networkd[1586]: calie9e7c66eb36: Link UP Jan 20 23:53:40.494010 systemd-networkd[1586]: calie9e7c66eb36: Gained carrier Jan 20 23:53:40.502938 containerd[1670]: time="2026-01-20T23:53:40.502881631Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-t9hcb,Uid:aa5babee-7c4f-49df-8be7-6dbca8594e57,Namespace:kube-system,Attempt:0,} returns sandbox id \"c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3\"" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.171 [INFO][4429] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.190 [INFO][4429] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0 calico-kube-controllers-596dccccb4- calico-system a2e09f63-e3b2-438b-a6ce-d2eefff60f3e 838 0 2026-01-20 23:53:22 +0000 UTC map[app.kubernetes.io/name:calico-kube-controllers k8s-app:calico-kube-controllers pod-template-hash:596dccccb4 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-kube-controllers] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 calico-kube-controllers-596dccccb4-mz7rs eth0 calico-kube-controllers [] [] [kns.calico-system ksa.calico-system.calico-kube-controllers] calie9e7c66eb36 [] [] }} ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.190 [INFO][4429] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.234 [INFO][4487] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 
ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" HandleID="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.234 [INFO][4487] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" HandleID="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400012ee50), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"calico-kube-controllers-596dccccb4-mz7rs", "timestamp":"2026-01-20 23:53:40.234244703 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.234 [INFO][4487] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4487] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.392 [INFO][4487] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.446 [INFO][4487] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.453 [INFO][4487] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.466 [INFO][4487] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.468 [INFO][4487] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.471 [INFO][4487] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.471 [INFO][4487] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.473 [INFO][4487] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5 Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.481 [INFO][4487] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.487 [INFO][4487] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.196/26] block=192.168.16.192/26 
handle="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.487 [INFO][4487] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.196/26] handle="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.487 [INFO][4487] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 23:53:40.511472 containerd[1670]: 2026-01-20 23:53:40.487 [INFO][4487] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.196/26] IPv6=[] ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" HandleID="k8s-pod-network.52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.490 [INFO][4429] cni-plugin/k8s.go 418: Populated endpoint ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0", GenerateName:"calico-kube-controllers-596dccccb4-", Namespace:"calico-system", SelfLink:"", UID:"a2e09f63-e3b2-438b-a6ce-d2eefff60f3e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596dccccb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"calico-kube-controllers-596dccccb4-mz7rs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9e7c66eb36", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.490 [INFO][4429] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.196/32] ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.490 [INFO][4429] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie9e7c66eb36 ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" 
Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.494 [INFO][4429] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.494 [INFO][4429] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0", GenerateName:"calico-kube-controllers-596dccccb4-", Namespace:"calico-system", SelfLink:"", UID:"a2e09f63-e3b2-438b-a6ce-d2eefff60f3e", ResourceVersion:"838", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"calico-kube-controllers", "k8s-app":"calico-kube-controllers", "pod-template-hash":"596dccccb4", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-kube-controllers"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5", Pod:"calico-kube-controllers-596dccccb4-mz7rs", Endpoint:"eth0", ServiceAccountName:"calico-kube-controllers", IPNetworks:[]string{"192.168.16.196/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.calico-kube-controllers"}, InterfaceName:"calie9e7c66eb36", MAC:"8e:78:bc:ed:7c:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.512476 containerd[1670]: 2026-01-20 23:53:40.508 [INFO][4429] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" Namespace="calico-system" Pod="calico-kube-controllers-596dccccb4-mz7rs" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--kube--controllers--596dccccb4--mz7rs-eth0" Jan 20 23:53:40.514684 containerd[1670]: time="2026-01-20T23:53:40.514651304Z" level=info msg="CreateContainer within sandbox \"c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:53:40.532643 containerd[1670]: time="2026-01-20T23:53:40.532591716Z" level=info msg="Container ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:40.536652 containerd[1670]: 
time="2026-01-20T23:53:40.536601007Z" level=info msg="connecting to shim 52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5" address="unix:///run/containerd/s/f645220e1930dbf2d1a3c8742accf58c1c5743b7e345a8fc35ee0dd6c5e37b7d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:40.541526 containerd[1670]: time="2026-01-20T23:53:40.541486621Z" level=info msg="CreateContainer within sandbox \"c34de7ffff12b4d25b81478f592d12eed18e7be8502158d685b81332098f08a3\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns container id \"ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55\"" Jan 20 23:53:40.542146 containerd[1670]: time="2026-01-20T23:53:40.541938382Z" level=info msg="StartContainer for \"ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55\"" Jan 20 23:53:40.542761 containerd[1670]: time="2026-01-20T23:53:40.542732385Z" level=info msg="connecting to shim ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55" address="unix:///run/containerd/s/9af5422c963223e4e849d5f691da450382d8f64c2375fc8505ab288177a831be" protocol=ttrpc version=3 Jan 20 23:53:40.571700 systemd[1]: Started cri-containerd-52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5.scope - libcontainer container 52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5. Jan 20 23:53:40.575258 systemd[1]: Started cri-containerd-ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55.scope - libcontainer container ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55. Jan 20 23:53:40.585000 audit: BPF prog-id=190 op=LOAD Jan 20 23:53:40.586000 audit: BPF prog-id=191 op=LOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=191 op=UNLOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=192 op=LOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=193 op=LOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=193 op=UNLOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=192 op=UNLOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.586000 audit: BPF prog-id=194 op=LOAD Jan 20 23:53:40.586000 audit[4658]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4646 pid=4658 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.586000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3532626261653132383030306138626664636165643532636136633239 Jan 20 23:53:40.589000 audit: BPF prog-id=195 op=LOAD Jan 20 23:53:40.593000 audit: BPF prog-id=196 op=LOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0180 a2=98 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=196 op=UNLOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=197 op=LOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a03e8 a2=98 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=198 op=LOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001a0168 a2=98 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=198 op=UNLOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=197 op=UNLOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.593000 audit: BPF prog-id=199 op=LOAD Jan 20 23:53:40.593000 audit[4659]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001a0648 a2=98 a3=0 items=0 ppid=4587 pid=4659 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.593000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6165383966666333656634623635613737346333656565366232653231 Jan 20 23:53:40.601004 systemd-networkd[1586]: cali93125803e83: Link UP Jan 20 23:53:40.603052 systemd-networkd[1586]: cali93125803e83: Gained carrier Jan 20 23:53:40.626047 containerd[1670]: time="2026-01-20T23:53:40.625930822Z" level=info msg="StartContainer for \"ae89ffc3ef4b65a774c3eee6b2e21f2c4620598395b5ffcad16c1dcf20a60a55\" returns successfully" Jan 20 23:53:40.626047 containerd[1670]: time="2026-01-20T23:53:40.626011463Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-kube-controllers-596dccccb4-mz7rs,Uid:a2e09f63-e3b2-438b-a6ce-d2eefff60f3e,Namespace:calico-system,Attempt:0,} returns sandbox id \"52bbae128000a8bfdcaed52ca6c2966917aa011b14f98074c98cfe30664e71d5\"" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.188 [INFO][4440] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.211 [INFO][4440] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0 coredns-674b8bbfcf- kube-system 9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e 847 0 2026-01-20 23:53:05 +0000 UTC map[k8s-app:kube-dns pod-template-hash:674b8bbfcf projectcalico.org/namespace:kube-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:coredns] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 coredns-674b8bbfcf-tzdxw eth0 coredns [] [] [kns.kube-system ksa.kube-system.coredns] cali93125803e83 [{dns UDP 53 0 } {dns-tcp TCP 53 0 } {metrics TCP 9153 0 }] [] }} ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.211 [INFO][4440] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.250 [INFO][4508] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" HandleID="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 
23:53:40.250 [INFO][4508] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" HandleID="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000323390), Attrs:map[string]string{"namespace":"kube-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"coredns-674b8bbfcf-tzdxw", "timestamp":"2026-01-20 23:53:40.250360869 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.250 [INFO][4508] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.487 [INFO][4508] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.488 [INFO][4508] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.547 [INFO][4508] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.555 [INFO][4508] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.565 [INFO][4508] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.568 [INFO][4508] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.570 [INFO][4508] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.570 [INFO][4508] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.572 [INFO][4508] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.581 [INFO][4508] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.589 [INFO][4508] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.197/26] block=192.168.16.192/26 handle="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.589 [INFO][4508] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.197/26] handle="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" host="ci-4547-0-0-n-e5b472a427" Jan 20 
23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.589 [INFO][4508] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. Jan 20 23:53:40.626908 containerd[1670]: 2026-01-20 23:53:40.589 [INFO][4508] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.197/26] IPv6=[] ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" HandleID="k8s-pod-network.b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Workload="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.593 [INFO][4440] cni-plugin/k8s.go 418: Populated endpoint ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"coredns-674b8bbfcf-tzdxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93125803e83", MAC:"", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.593 [INFO][4440] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.197/32] ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.593 [INFO][4440] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali93125803e83 ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.603 [INFO][4440] cni-plugin/dataplane_linux.go 508: Disabling IPv4 
forwarding ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.606 [INFO][4440] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0", GenerateName:"coredns-674b8bbfcf-", Namespace:"kube-system", SelfLink:"", UID:"9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e", ResourceVersion:"847", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 5, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"k8s-app":"kube-dns", "pod-template-hash":"674b8bbfcf", "projectcalico.org/namespace":"kube-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"coredns"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a", Pod:"coredns-674b8bbfcf-tzdxw", Endpoint:"eth0", ServiceAccountName:"coredns", IPNetworks:[]string{"192.168.16.197/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.kube-system", "ksa.kube-system.coredns"}, InterfaceName:"cali93125803e83", MAC:"e6:45:56:1a:39:2f", Ports:[]v3.WorkloadEndpointPort{v3.WorkloadEndpointPort{Name:"dns", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"UDP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"dns-tcp", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x35, HostPort:0x0, HostIP:""}, v3.WorkloadEndpointPort{Name:"metrics", Protocol:numorstring.Protocol{Type:1, NumVal:0x0, StrVal:"TCP"}, Port:0x23c1, HostPort:0x0, HostIP:""}}, AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:40.627486 containerd[1670]: 2026-01-20 23:53:40.621 [INFO][4440] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" Namespace="kube-system" Pod="coredns-674b8bbfcf-tzdxw" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-coredns--674b8bbfcf--tzdxw-eth0" Jan 20 23:53:40.653707 containerd[1670]: time="2026-01-20T23:53:40.653510541Z" level=info msg="connecting to shim b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a" address="unix:///run/containerd/s/ec97655d5d785582b8ca236b211eb5023ab66d1dca76d3bb8946f2708f582983" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:40.678749 systemd[1]: Started cri-containerd-b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a.scope - libcontainer container b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a. 
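The audit records interleaved with these containerd entries carry each runc invocation's command line in the proctitle field, hex-encoded with NUL-separated arguments. As an illustration only (not part of the log), a minimal Python sketch that decodes one of the proctitle values recorded above; the string is copied verbatim from the audit records and is truncated by the kernel's proctitle length limit:

    # Decode an audit PROCTITLE value: hex bytes, arguments separated by NUL.
    hex_proctitle = (
        "72756E6300"                                                  # runc\0
        "2D2D726F6F7400"                                              # --root\0
        "2F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F00"    # /run/containerd/runc/k8s.io\0
        "2D2D6C6F6700"                                                # --log\0
        "2F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E"
        "72756E74696D652E76322E7461736B2F6B38732E696F2F"
        "3532626261653132383030306138626664636165643532636136633239"
    )
    argv = [part.decode() for part in bytes.fromhex(hex_proctitle).split(b"\x00")]
    print(argv)
    # ['runc', '--root', '/run/containerd/runc/k8s.io', '--log',
    #  '/run/containerd/io.containerd.runtime.v2.task/k8s.io/52bbae128000a8bfdcaed52ca6c29']
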
Jan 20 23:53:40.689000 audit: BPF prog-id=200 op=LOAD Jan 20 23:53:40.690000 audit: BPF prog-id=201 op=LOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=201 op=UNLOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=202 op=LOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=203 op=LOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=203 op=UNLOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=202 op=UNLOAD Jan 20 23:53:40.690000 audit[4739]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.690000 audit: BPF prog-id=204 op=LOAD Jan 20 23:53:40.690000 audit[4739]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4727 pid=4739 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.690000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6236613063336236613337353230323861386437646561343565643964 Jan 20 23:53:40.719378 containerd[1670]: time="2026-01-20T23:53:40.719336209Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:coredns-674b8bbfcf-tzdxw,Uid:9ba157f6-cf6d-4c61-afb6-496bc0f9fb0e,Namespace:kube-system,Attempt:0,} returns sandbox id \"b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a\"" Jan 20 23:53:40.720591 containerd[1670]: time="2026-01-20T23:53:40.720531013Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:40.723209 containerd[1670]: time="2026-01-20T23:53:40.723145180Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:53:40.723310 containerd[1670]: time="2026-01-20T23:53:40.723205420Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:40.723648 kubelet[2940]: E0120 23:53:40.723555 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:40.724230 kubelet[2940]: E0120 23:53:40.723975 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:40.724336 kubelet[2940]: E0120 23:53:40.724288 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n26q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:40.724876 containerd[1670]: time="2026-01-20T23:53:40.724529424Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:53:40.725629 kubelet[2940]: E0120 23:53:40.725586 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:53:40.725889 containerd[1670]: time="2026-01-20T23:53:40.725833228Z" level=info msg="CreateContainer within sandbox \"b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a\" for container &ContainerMetadata{Name:coredns,Attempt:0,}" Jan 20 23:53:40.739410 containerd[1670]: time="2026-01-20T23:53:40.738976706Z" level=info msg="Container 7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:53:40.747283 containerd[1670]: time="2026-01-20T23:53:40.747218849Z" level=info msg="CreateContainer within sandbox \"b6a0c3b6a3752028a8d7dea45ed9d1d13d3cd8cbe15c8b6991d1684758fafa7a\" for &ContainerMetadata{Name:coredns,Attempt:0,} returns 
container id \"7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3\"" Jan 20 23:53:40.747786 containerd[1670]: time="2026-01-20T23:53:40.747743171Z" level=info msg="StartContainer for \"7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3\"" Jan 20 23:53:40.748754 containerd[1670]: time="2026-01-20T23:53:40.748711373Z" level=info msg="connecting to shim 7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3" address="unix:///run/containerd/s/ec97655d5d785582b8ca236b211eb5023ab66d1dca76d3bb8946f2708f582983" protocol=ttrpc version=3 Jan 20 23:53:40.779696 systemd[1]: Started cri-containerd-7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3.scope - libcontainer container 7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3. Jan 20 23:53:40.789000 audit: BPF prog-id=205 op=LOAD Jan 20 23:53:40.790000 audit: BPF prog-id=206 op=LOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=206 op=UNLOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=207 op=LOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=208 op=LOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=208 op=UNLOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=207 op=UNLOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.790000 audit: BPF prog-id=209 op=LOAD Jan 20 23:53:40.790000 audit[4767]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4727 pid=4767 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:40.790000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3766663166343139623830313039346436633533666532373661663536 Jan 20 23:53:40.809769 containerd[1670]: time="2026-01-20T23:53:40.809730788Z" level=info msg="StartContainer for \"7ff1f419b801094d6c53fe276af56f2fa1eedcddf8685021e9b2e29254f3d1a3\" returns successfully" Jan 20 23:53:41.062269 containerd[1670]: time="2026-01-20T23:53:41.062050429Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:41.063262 containerd[1670]: time="2026-01-20T23:53:41.063225993Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:53:41.063444 containerd[1670]: time="2026-01-20T23:53:41.063273473Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:41.063629 kubelet[2940]: E0120 23:53:41.063587 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" 
image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:53:41.063675 kubelet[2940]: E0120 23:53:41.063643 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:53:41.063861 kubelet[2940]: E0120 23:53:41.063796 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dzwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:41.065556 kubelet[2940]: E0120 23:53:41.065516 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" 
with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:53:41.126021 containerd[1670]: time="2026-01-20T23:53:41.125941732Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8rjq,Uid:d9863653-2d98-4479-88c2-8614b7871a32,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:41.250634 systemd-networkd[1586]: cali9753ac3b4e9: Link UP Jan 20 23:53:41.251175 systemd-networkd[1586]: cali9753ac3b4e9: Gained carrier Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.151 [INFO][4823] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.165 [INFO][4823] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0 csi-node-driver- calico-system d9863653-2d98-4479-88c2-8614b7871a32 757 0 2026-01-20 23:53:22 +0000 UTC map[app.kubernetes.io/name:csi-node-driver controller-revision-hash:857b56db8f k8s-app:csi-node-driver name:csi-node-driver pod-template-generation:1 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:csi-node-driver] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 csi-node-driver-f8rjq eth0 csi-node-driver [] [] [kns.calico-system ksa.calico-system.csi-node-driver] cali9753ac3b4e9 [] [] }} ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.165 [INFO][4823] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.201 [INFO][4837] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" HandleID="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Workload="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.201 [INFO][4837] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" HandleID="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Workload="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x4000137590), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"csi-node-driver-f8rjq", "timestamp":"2026-01-20 23:53:41.201708548 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.202 
[INFO][4837] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.202 [INFO][4837] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.202 [INFO][4837] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.214 [INFO][4837] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.220 [INFO][4837] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.225 [INFO][4837] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.227 [INFO][4837] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.230 [INFO][4837] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.230 [INFO][4837] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.232 [INFO][4837] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5 Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.239 [INFO][4837] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.246 [INFO][4837] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.198/26] block=192.168.16.192/26 handle="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.246 [INFO][4837] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.198/26] handle="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.246 [INFO][4837] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
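The IPAM sequence above repeats the pattern seen earlier for the kube-controllers and coredns endpoints: node ci-4547-0-0-n-e5b472a427 holds an affinity for the block 192.168.16.192/26, and successive pods receive 192.168.16.196, .197, and now .198 from it. A small illustrative sketch of the block arithmetic (Python ipaddress module, not part of the log):

    import ipaddress

    # The node's affine IPAM block, as logged by ipam/ipam.go above.
    block = ipaddress.ip_network("192.168.16.192/26")
    print(block.num_addresses)           # 64 addresses in a /26 block
    print(block[0], "-", block[-1])      # 192.168.16.192 - 192.168.16.255

    # The addresses assigned in this log all fall inside that one block.
    for ip in ("192.168.16.196", "192.168.16.197", "192.168.16.198"):
        print(ip, ipaddress.ip_address(ip) in block)   # True for each
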
Jan 20 23:53:41.270869 containerd[1670]: 2026-01-20 23:53:41.246 [INFO][4837] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.198/26] IPv6=[] ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" HandleID="k8s-pod-network.1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Workload="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.248 [INFO][4823] cni-plugin/k8s.go 418: Populated endpoint ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9863653-2d98-4479-88c2-8614b7871a32", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"csi-node-driver-f8rjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9753ac3b4e9", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.249 [INFO][4823] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.198/32] ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.249 [INFO][4823] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali9753ac3b4e9 ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.251 [INFO][4823] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.251 [INFO][4823] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint 
ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0", GenerateName:"csi-node-driver-", Namespace:"calico-system", SelfLink:"", UID:"d9863653-2d98-4479-88c2-8614b7871a32", ResourceVersion:"757", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 22, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"csi-node-driver", "controller-revision-hash":"857b56db8f", "k8s-app":"csi-node-driver", "name":"csi-node-driver", "pod-template-generation":"1", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"csi-node-driver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5", Pod:"csi-node-driver-f8rjq", Endpoint:"eth0", ServiceAccountName:"csi-node-driver", IPNetworks:[]string{"192.168.16.198/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.csi-node-driver"}, InterfaceName:"cali9753ac3b4e9", MAC:"8e:f2:65:e0:90:f1", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:41.274667 containerd[1670]: 2026-01-20 23:53:41.263 [INFO][4823] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" Namespace="calico-system" Pod="csi-node-driver-f8rjq" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-csi--node--driver--f8rjq-eth0" Jan 20 23:53:41.277569 kubelet[2940]: E0120 23:53:41.277523 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:53:41.285340 kubelet[2940]: E0120 23:53:41.285283 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:53:41.311187 containerd[1670]: time="2026-01-20T23:53:41.311023101Z" level=info msg="connecting to shim 1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5" 
address="unix:///run/containerd/s/17311e1408fdf06fd8387cab5ab0bb8619aaacc6bc90b2997c06a64e79e6f0d4" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:41.330000 audit[4870]: NETFILTER_CFG table=filter:121 family=2 entries=22 op=nft_register_rule pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:41.330000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffd33f7bc0 a2=0 a3=1 items=0 ppid=3104 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:41.335000 audit[4870]: NETFILTER_CFG table=nat:122 family=2 entries=12 op=nft_register_rule pid=4870 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:41.335000 audit[4870]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffd33f7bc0 a2=0 a3=1 items=0 ppid=3104 pid=4870 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.335000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:41.340084 kubelet[2940]: I0120 23:53:41.340006 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-t9hcb" podStartSLOduration=36.339972384 podStartE2EDuration="36.339972384s" podCreationTimestamp="2026-01-20 23:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:41.324194539 +0000 UTC m=+41.298313594" watchObservedRunningTime="2026-01-20 23:53:41.339972384 +0000 UTC m=+41.314091439" Jan 20 23:53:41.340273 kubelet[2940]: I0120 23:53:41.340169 2940 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="kube-system/coredns-674b8bbfcf-tzdxw" podStartSLOduration=36.340163384 podStartE2EDuration="36.340163384s" podCreationTimestamp="2026-01-20 23:53:05 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-20 23:53:41.338848421 +0000 UTC m=+41.312967476" watchObservedRunningTime="2026-01-20 23:53:41.340163384 +0000 UTC m=+41.314282439" Jan 20 23:53:41.353000 audit[4886]: NETFILTER_CFG table=filter:123 family=2 entries=22 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:41.353000 audit[4886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8224 a0=3 a1=ffffdb7e9a90 a2=0 a3=1 items=0 ppid=3104 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.353000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:41.361709 systemd[1]: Started cri-containerd-1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5.scope - libcontainer container 1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5. 
Jan 20 23:53:41.362000 audit[4886]: NETFILTER_CFG table=nat:124 family=2 entries=12 op=nft_register_rule pid=4886 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:41.362000 audit[4886]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=2700 a0=3 a1=ffffdb7e9a90 a2=0 a3=1 items=0 ppid=3104 pid=4886 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.362000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:41.377000 audit: BPF prog-id=210 op=LOAD Jan 20 23:53:41.379000 audit: BPF prog-id=211 op=LOAD Jan 20 23:53:41.379000 audit[4873]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.379000 audit: BPF prog-id=211 op=UNLOAD Jan 20 23:53:41.379000 audit[4873]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.379000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.380000 audit: BPF prog-id=212 op=LOAD Jan 20 23:53:41.380000 audit[4873]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.380000 audit: BPF prog-id=213 op=LOAD Jan 20 23:53:41.380000 audit[4873]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.380000 audit: BPF prog-id=213 op=UNLOAD Jan 20 23:53:41.380000 audit[4873]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.380000 audit: BPF prog-id=212 op=UNLOAD Jan 20 23:53:41.380000 audit[4873]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.380000 audit: BPF prog-id=214 op=LOAD Jan 20 23:53:41.380000 audit[4873]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4860 pid=4873 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:41.380000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3161663930323965663235333837386435353264303132666638333537 Jan 20 23:53:41.407578 containerd[1670]: time="2026-01-20T23:53:41.407541337Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:csi-node-driver-f8rjq,Uid:d9863653-2d98-4479-88c2-8614b7871a32,Namespace:calico-system,Attempt:0,} returns sandbox id \"1af9029ef253878d552d012ff8357567e28c979b71e3de563ed109327e39b1e5\"" Jan 20 23:53:41.410328 containerd[1670]: time="2026-01-20T23:53:41.409881344Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:53:41.526720 systemd-networkd[1586]: cali990ded0a686: Gained IPv6LL Jan 20 23:53:41.741344 containerd[1670]: time="2026-01-20T23:53:41.741050450Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:41.743825 containerd[1670]: time="2026-01-20T23:53:41.743703098Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:53:41.743825 containerd[1670]: time="2026-01-20T23:53:41.743719498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:41.744036 kubelet[2940]: E0120 23:53:41.743960 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:53:41.744036 kubelet[2940]: E0120 
23:53:41.744013 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:53:41.744438 kubelet[2940]: E0120 23:53:41.744133 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:41.746502 containerd[1670]: time="2026-01-20T23:53:41.746445626Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:53:41.910876 systemd-networkd[1586]: cali078793afb89: Gained IPv6LL Jan 20 23:53:42.093446 containerd[1670]: time="2026-01-20T23:53:42.093154217Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:42.094840 containerd[1670]: time="2026-01-20T23:53:42.094774862Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:53:42.095234 containerd[1670]: time="2026-01-20T23:53:42.094829982Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:42.095276 
kubelet[2940]: E0120 23:53:42.095189 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:53:42.095276 kubelet[2940]: E0120 23:53:42.095236 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:53:42.095428 kubelet[2940]: E0120 23:53:42.095379 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:42.096746 kubelet[2940]: E0120 23:53:42.096686 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc 
error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:42.127923 containerd[1670]: time="2026-01-20T23:53:42.127869196Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-czqtd,Uid:ba92b217-c758-44c6-b97e-3beb84feb1eb,Namespace:calico-apiserver,Attempt:0,}" Jan 20 23:53:42.238193 systemd-networkd[1586]: cali3b0f7f287a3: Link UP Jan 20 23:53:42.238726 systemd-networkd[1586]: cali3b0f7f287a3: Gained carrier Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.155 [INFO][4927] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.170 [INFO][4927] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0 calico-apiserver-767c66c85d- calico-apiserver ba92b217-c758-44c6-b97e-3beb84feb1eb 844 0 2026-01-20 23:53:15 +0000 UTC map[apiserver:true app.kubernetes.io/name:calico-apiserver k8s-app:calico-apiserver pod-template-hash:767c66c85d projectcalico.org/namespace:calico-apiserver projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:calico-apiserver] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 calico-apiserver-767c66c85d-czqtd eth0 calico-apiserver [] [] [kns.calico-apiserver ksa.calico-apiserver.calico-apiserver] cali3b0f7f287a3 [] [] }} ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.170 [INFO][4927] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.191 [INFO][4942] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" HandleID="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.192 [INFO][4942] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" HandleID="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x400004db20), Attrs:map[string]string{"namespace":"calico-apiserver", "node":"ci-4547-0-0-n-e5b472a427", "pod":"calico-apiserver-767c66c85d-czqtd", "timestamp":"2026-01-20 23:53:42.191838179 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), 
IntendedUse:"Workload"} Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.193 [INFO][4942] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.193 [INFO][4942] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.193 [INFO][4942] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.203 [INFO][4942] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.209 [INFO][4942] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.215 [INFO][4942] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.218 [INFO][4942] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.220 [INFO][4942] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.220 [INFO][4942] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.222 [INFO][4942] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.226 [INFO][4942] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.233 [INFO][4942] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.199/26] block=192.168.16.192/26 handle="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.233 [INFO][4942] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.199/26] handle="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.233 [INFO][4942] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 23:53:42.254007 containerd[1670]: 2026-01-20 23:53:42.233 [INFO][4942] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.199/26] IPv6=[] ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" HandleID="k8s-pod-network.7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Workload="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 23:53:42.235 [INFO][4927] cni-plugin/k8s.go 418: Populated endpoint ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0", GenerateName:"calico-apiserver-767c66c85d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba92b217-c758-44c6-b97e-3beb84feb1eb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"767c66c85d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"calico-apiserver-767c66c85d-czqtd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b0f7f287a3", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 23:53:42.236 [INFO][4927] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.199/32] ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 23:53:42.236 [INFO][4927] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to cali3b0f7f287a3 ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 23:53:42.239 [INFO][4927] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 
23:53:42.241 [INFO][4927] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0", GenerateName:"calico-apiserver-767c66c85d-", Namespace:"calico-apiserver", SelfLink:"", UID:"ba92b217-c758-44c6-b97e-3beb84feb1eb", ResourceVersion:"844", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 15, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"apiserver":"true", "app.kubernetes.io/name":"calico-apiserver", "k8s-app":"calico-apiserver", "pod-template-hash":"767c66c85d", "projectcalico.org/namespace":"calico-apiserver", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"calico-apiserver"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c", Pod:"calico-apiserver-767c66c85d-czqtd", Endpoint:"eth0", ServiceAccountName:"calico-apiserver", IPNetworks:[]string{"192.168.16.199/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-apiserver", "ksa.calico-apiserver.calico-apiserver"}, InterfaceName:"cali3b0f7f287a3", MAC:"72:53:e5:ac:c6:a7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:42.254630 containerd[1670]: 2026-01-20 23:53:42.250 [INFO][4927] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" Namespace="calico-apiserver" Pod="calico-apiserver-767c66c85d-czqtd" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-calico--apiserver--767c66c85d--czqtd-eth0" Jan 20 23:53:42.276121 containerd[1670]: time="2026-01-20T23:53:42.275614579Z" level=info msg="connecting to shim 7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c" address="unix:///run/containerd/s/bddb8a02a3c461800d18e4b887dbcedc085242b4de8df42836a4be620c70cf8d" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:42.288099 kubelet[2940]: E0120 23:53:42.288044 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:53:42.289223 kubelet[2940]: E0120 23:53:42.288531 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: 
rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:53:42.292853 kubelet[2940]: E0120 23:53:42.292654 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:42.295617 systemd-networkd[1586]: calie9e7c66eb36: Gained IPv6LL Jan 20 23:53:42.322697 systemd[1]: Started cri-containerd-7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c.scope - libcontainer container 7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c. Jan 20 23:53:42.338000 audit: BPF prog-id=215 op=LOAD Jan 20 23:53:42.339000 audit: BPF prog-id=216 op=LOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=216 op=UNLOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=217 op=LOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=218 op=LOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=218 op=UNLOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=217 op=UNLOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.339000 audit: BPF prog-id=219 op=LOAD Jan 20 23:53:42.339000 audit[4976]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=4964 pid=4976 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.339000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3739303164633063393934396534613634326434656166336635356533 Jan 20 23:53:42.371226 containerd[1670]: time="2026-01-20T23:53:42.371103492Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:calico-apiserver-767c66c85d-czqtd,Uid:ba92b217-c758-44c6-b97e-3beb84feb1eb,Namespace:calico-apiserver,Attempt:0,} returns sandbox id \"7901dc0c9949e4a642d4eaf3f55e3a693ef7ecc86647fad0707505dc2aeb365c\"" Jan 20 23:53:42.373115 containerd[1670]: time="2026-01-20T23:53:42.373076497Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:53:42.377000 audit[5004]: NETFILTER_CFG table=filter:125 family=2 
entries=19 op=nft_register_rule pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:42.377000 audit[5004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffd7e67750 a2=0 a3=1 items=0 ppid=3104 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.377000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:42.391000 audit[5004]: NETFILTER_CFG table=nat:126 family=2 entries=45 op=nft_register_chain pid=5004 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:42.391000 audit[5004]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=19092 a0=3 a1=ffffd7e67750 a2=0 a3=1 items=0 ppid=3104 pid=5004 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:42.391000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:42.487439 systemd-networkd[1586]: cali9753ac3b4e9: Gained IPv6LL Jan 20 23:53:42.679198 systemd-networkd[1586]: cali93125803e83: Gained IPv6LL Jan 20 23:53:42.704259 containerd[1670]: time="2026-01-20T23:53:42.704196564Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:42.706346 containerd[1670]: time="2026-01-20T23:53:42.706304091Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:53:42.706346 containerd[1670]: time="2026-01-20T23:53:42.706369691Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:42.706716 kubelet[2940]: E0120 23:53:42.706616 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:42.706716 kubelet[2940]: E0120 23:53:42.706679 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:42.706932 kubelet[2940]: E0120 23:53:42.706857 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:42.708331 kubelet[2940]: E0120 23:53:42.708259 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:53:43.126147 containerd[1670]: time="2026-01-20T23:53:43.125892253Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27t6c,Uid:7524b8f6-4e20-4bc6-8860-ffd104203deb,Namespace:calico-system,Attempt:0,}" Jan 20 23:53:43.236581 systemd-networkd[1586]: calie6af68ef91c: Link UP Jan 20 23:53:43.236876 systemd-networkd[1586]: calie6af68ef91c: Gained carrier Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.151 [INFO][5032] cni-plugin/utils.go 100: File /var/lib/calico/mtu does not exist Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.169 [INFO][5032] cni-plugin/plugin.go 340: Calico CNI found existing endpoint: &{{WorkloadEndpoint projectcalico.org/v3} {ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0 goldmane-666569f655- calico-system 7524b8f6-4e20-4bc6-8860-ffd104203deb 845 0 2026-01-20 
23:53:20 +0000 UTC map[app.kubernetes.io/name:goldmane k8s-app:goldmane pod-template-hash:666569f655 projectcalico.org/namespace:calico-system projectcalico.org/orchestrator:k8s projectcalico.org/serviceaccount:goldmane] map[] [] [] []} {k8s ci-4547-0-0-n-e5b472a427 goldmane-666569f655-27t6c eth0 goldmane [] [] [kns.calico-system ksa.calico-system.goldmane] calie6af68ef91c [] [] }} ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.169 [INFO][5032] cni-plugin/k8s.go 74: Extracted identifiers for CmdAddK8s ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.191 [INFO][5046] ipam/ipam_plugin.go 227: Calico CNI IPAM request count IPv4=1 IPv6=0 ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" HandleID="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Workload="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.191 [INFO][5046] ipam/ipam_plugin.go 275: Auto assigning IP ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" HandleID="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Workload="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" assignArgs=ipam.AutoAssignArgs{Num4:1, Num6:0, HandleID:(*string)(0x40002c37e0), Attrs:map[string]string{"namespace":"calico-system", "node":"ci-4547-0-0-n-e5b472a427", "pod":"goldmane-666569f655-27t6c", "timestamp":"2026-01-20 23:53:43.191593601 +0000 UTC"}, Hostname:"ci-4547-0-0-n-e5b472a427", IPv4Pools:[]net.IPNet{}, IPv6Pools:[]net.IPNet{}, MaxBlocksPerHost:0, HostReservedAttrIPv4s:(*ipam.HostReservedAttr)(nil), HostReservedAttrIPv6s:(*ipam.HostReservedAttr)(nil), IntendedUse:"Workload"} Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.191 [INFO][5046] ipam/ipam_plugin.go 377: About to acquire host-wide IPAM lock. Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.191 [INFO][5046] ipam/ipam_plugin.go 392: Acquired host-wide IPAM lock. 
Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.191 [INFO][5046] ipam/ipam.go 110: Auto-assign 1 ipv4, 0 ipv6 addrs for host 'ci-4547-0-0-n-e5b472a427' Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.202 [INFO][5046] ipam/ipam.go 691: Looking up existing affinities for host handle="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.208 [INFO][5046] ipam/ipam.go 394: Looking up existing affinities for host host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.213 [INFO][5046] ipam/ipam.go 511: Trying affinity for 192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.215 [INFO][5046] ipam/ipam.go 158: Attempting to load block cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.217 [INFO][5046] ipam/ipam.go 235: Affinity is confirmed and block has been loaded cidr=192.168.16.192/26 host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.217 [INFO][5046] ipam/ipam.go 1219: Attempting to assign 1 addresses from block block=192.168.16.192/26 handle="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.219 [INFO][5046] ipam/ipam.go 1780: Creating new handle: k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667 Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.223 [INFO][5046] ipam/ipam.go 1246: Writing block in order to claim IPs block=192.168.16.192/26 handle="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.231 [INFO][5046] ipam/ipam.go 1262: Successfully claimed IPs: [192.168.16.200/26] block=192.168.16.192/26 handle="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.231 [INFO][5046] ipam/ipam.go 878: Auto-assigned 1 out of 1 IPv4s: [192.168.16.200/26] handle="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" host="ci-4547-0-0-n-e5b472a427" Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.231 [INFO][5046] ipam/ipam_plugin.go 398: Released host-wide IPAM lock. 
Jan 20 23:53:43.250165 containerd[1670]: 2026-01-20 23:53:43.232 [INFO][5046] ipam/ipam_plugin.go 299: Calico CNI IPAM assigned addresses IPv4=[192.168.16.200/26] IPv6=[] ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" HandleID="k8s-pod-network.e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Workload="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.233 [INFO][5032] cni-plugin/k8s.go 418: Populated endpoint ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7524b8f6-4e20-4bc6-8860-ffd104203deb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"", Pod:"goldmane-666569f655-27t6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6af68ef91c", MAC:"", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.233 [INFO][5032] cni-plugin/k8s.go 419: Calico CNI using IPs: [192.168.16.200/32] ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.233 [INFO][5032] cni-plugin/dataplane_linux.go 69: Setting the host side veth name to calie6af68ef91c ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.236 [INFO][5032] cni-plugin/dataplane_linux.go 508: Disabling IPv4 forwarding ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.237 [INFO][5032] cni-plugin/k8s.go 446: Added Mac, interface name, and active container ID to endpoint ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" 
Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" endpoint=&v3.WorkloadEndpoint{TypeMeta:v1.TypeMeta{Kind:"WorkloadEndpoint", APIVersion:"projectcalico.org/v3"}, ObjectMeta:v1.ObjectMeta{Name:"ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0", GenerateName:"goldmane-666569f655-", Namespace:"calico-system", SelfLink:"", UID:"7524b8f6-4e20-4bc6-8860-ffd104203deb", ResourceVersion:"845", Generation:0, CreationTimestamp:time.Date(2026, time.January, 20, 23, 53, 20, 0, time.Local), DeletionTimestamp:, DeletionGracePeriodSeconds:(*int64)(nil), Labels:map[string]string{"app.kubernetes.io/name":"goldmane", "k8s-app":"goldmane", "pod-template-hash":"666569f655", "projectcalico.org/namespace":"calico-system", "projectcalico.org/orchestrator":"k8s", "projectcalico.org/serviceaccount":"goldmane"}, Annotations:map[string]string(nil), OwnerReferences:[]v1.OwnerReference(nil), Finalizers:[]string(nil), ManagedFields:[]v1.ManagedFieldsEntry(nil)}, Spec:v3.WorkloadEndpointSpec{Orchestrator:"k8s", Workload:"", Node:"ci-4547-0-0-n-e5b472a427", ContainerID:"e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667", Pod:"goldmane-666569f655-27t6c", Endpoint:"eth0", ServiceAccountName:"goldmane", IPNetworks:[]string{"192.168.16.200/32"}, IPNATs:[]v3.IPNAT(nil), IPv4Gateway:"", IPv6Gateway:"", Profiles:[]string{"kns.calico-system", "ksa.calico-system.goldmane"}, InterfaceName:"calie6af68ef91c", MAC:"12:07:82:ff:27:d7", Ports:[]v3.WorkloadEndpointPort(nil), AllowSpoofedSourcePrefixes:[]string(nil), QoSControls:(*v3.QoSControls)(nil)}} Jan 20 23:53:43.251414 containerd[1670]: 2026-01-20 23:53:43.248 [INFO][5032] cni-plugin/k8s.go 532: Wrote updated endpoint to datastore ContainerID="e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" Namespace="calico-system" Pod="goldmane-666569f655-27t6c" WorkloadEndpoint="ci--4547--0--0--n--e5b472a427-k8s-goldmane--666569f655--27t6c-eth0" Jan 20 23:53:43.270562 containerd[1670]: time="2026-01-20T23:53:43.270497548Z" level=info msg="connecting to shim e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667" address="unix:///run/containerd/s/87af117a5708e11582a85385451b7205500c96bf366b503563cb3f02d675a20c" namespace=k8s.io protocol=ttrpc version=3 Jan 20 23:53:43.290342 kubelet[2940]: E0120 23:53:43.290286 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:53:43.292308 kubelet[2940]: E0120 23:53:43.292162 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:43.294784 systemd[1]: Started cri-containerd-e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667.scope - libcontainer container e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667. Jan 20 23:53:43.310000 audit: BPF prog-id=220 op=LOAD Jan 20 23:53:43.311000 audit: BPF prog-id=221 op=LOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000124180 a2=98 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=221 op=UNLOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=222 op=LOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001243e8 a2=98 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=223 op=LOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000124168 a2=98 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=223 op=UNLOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 
items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=222 op=UNLOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.311000 audit: BPF prog-id=224 op=LOAD Jan 20 23:53:43.311000 audit[5080]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000124648 a2=98 a3=0 items=0 ppid=5069 pid=5080 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.311000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6534643639316233326665643631313931323461623266646137316361 Jan 20 23:53:43.354490 containerd[1670]: time="2026-01-20T23:53:43.353054664Z" level=info msg="RunPodSandbox for &PodSandboxMetadata{Name:goldmane-666569f655-27t6c,Uid:7524b8f6-4e20-4bc6-8860-ffd104203deb,Namespace:calico-system,Attempt:0,} returns sandbox id \"e4d691b32fed6119124ab2fda71ca577a6c24bf7eacbefbb282c193ae9016667\"" Jan 20 23:53:43.357472 containerd[1670]: time="2026-01-20T23:53:43.357412517Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:53:43.371000 audit[5107]: NETFILTER_CFG table=filter:127 family=2 entries=16 op=nft_register_rule pid=5107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:43.371000 audit[5107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=ffffcf7f34b0 a2=0 a3=1 items=0 ppid=3104 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:43.371000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:43.379000 audit[5107]: NETFILTER_CFG table=nat:128 family=2 entries=18 op=nft_register_rule pid=5107 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:43.379000 audit[5107]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=ffffcf7f34b0 a2=0 a3=1 items=0 ppid=3104 pid=5107 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) 
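The PROCTITLE fields in the audit records above are not opaque blobs: they are hex-encoded, NUL-separated argv vectors. A minimal Python sketch for decoding them offline (the sample string is the iptables-restore proctitle that recurs throughout this journal; the helper name and script layout are illustrative):

# decode_proctitle.py -- turn an audit PROCTITLE hex blob back into a readable command line.
def decode_proctitle(hex_blob: str) -> str:
    raw = bytes.fromhex(hex_blob)
    # argv elements are separated by NUL bytes inside the audit record
    return " ".join(part.decode(errors="replace") for part in raw.split(b"\x00") if part)

if __name__ == "__main__":
    sample = ("69707461626C65732D726573746F7265002D770035002D5700313030303030"
              "002D2D6E6F666C757368002D2D636F756E74657273")
    print(decode_proctitle(sample))  # -> iptables-restore -w 5 -W 100000 --noflush --counters

The same helper applies to the runc and bpftool proctitles recorded later in this section.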
Jan 20 23:53:43.379000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:43.703386 containerd[1670]: time="2026-01-20T23:53:43.703176387Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:43.709114 containerd[1670]: time="2026-01-20T23:53:43.708724643Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:53:43.709361 containerd[1670]: time="2026-01-20T23:53:43.708794283Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:43.709519 kubelet[2940]: E0120 23:53:43.709482 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:53:43.709598 kubelet[2940]: E0120 23:53:43.709530 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:53:43.709885 kubelet[2940]: E0120 23:53:43.709665 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:43.710907 kubelet[2940]: E0120 23:53:43.710855 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:53:43.894750 systemd-networkd[1586]: cali3b0f7f287a3: Gained IPv6LL Jan 20 23:53:44.293882 kubelet[2940]: E0120 23:53:44.293816 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:53:44.294274 kubelet[2940]: E0120 23:53:44.293320 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:53:44.330000 audit[5133]: NETFILTER_CFG table=filter:129 family=2 entries=16 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:44.330000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5992 a0=3 a1=fffffb88a870 a2=0 a3=1 items=0 ppid=3104 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:44.330000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:44.346000 
audit[5133]: NETFILTER_CFG table=nat:130 family=2 entries=18 op=nft_register_rule pid=5133 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:44.346000 audit[5133]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5004 a0=3 a1=fffffb88a870 a2=0 a3=1 items=0 ppid=3104 pid=5133 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:44.346000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:44.918629 systemd-networkd[1586]: calie6af68ef91c: Gained IPv6LL Jan 20 23:53:45.295342 kubelet[2940]: E0120 23:53:45.295269 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:53:45.376369 kubelet[2940]: I0120 23:53:45.376279 2940 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 20 23:53:45.405000 audit[5159]: NETFILTER_CFG table=filter:131 family=2 entries=15 op=nft_register_rule pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:45.406982 kernel: kauditd_printk_skb: 218 callbacks suppressed Jan 20 23:53:45.407049 kernel: audit: type=1325 audit(1768953225.405:670): table=filter:131 family=2 entries=15 op=nft_register_rule pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:45.405000 audit[5159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd3e01bd0 a2=0 a3=1 items=0 ppid=3104 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.412152 kernel: audit: type=1300 audit(1768953225.405:670): arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffd3e01bd0 a2=0 a3=1 items=0 ppid=3104 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.405000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:45.413959 kernel: audit: type=1327 audit(1768953225.405:670): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:45.414000 audit[5159]: NETFILTER_CFG table=nat:132 family=2 entries=25 op=nft_register_chain pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:45.414000 audit[5159]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffd3e01bd0 a2=0 a3=1 items=0 ppid=3104 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.420437 kernel: audit: type=1325 
audit(1768953225.414:671): table=nat:132 family=2 entries=25 op=nft_register_chain pid=5159 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:53:45.420498 kernel: audit: type=1300 audit(1768953225.414:671): arch=c00000b7 syscall=211 success=yes exit=8580 a0=3 a1=ffffd3e01bd0 a2=0 a3=1 items=0 ppid=3104 pid=5159 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.420525 kernel: audit: type=1327 audit(1768953225.414:671): proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:45.414000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:53:45.880000 audit: BPF prog-id=225 op=LOAD Jan 20 23:53:45.880000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff49aecd8 a2=98 a3=fffff49aecc8 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.885641 kernel: audit: type=1334 audit(1768953225.880:672): prog-id=225 op=LOAD Jan 20 23:53:45.885764 kernel: audit: type=1300 audit(1768953225.880:672): arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff49aecd8 a2=98 a3=fffff49aecc8 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.880000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.889886 kernel: audit: type=1327 audit(1768953225.880:672): proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.880000 audit: BPF prog-id=225 op=UNLOAD Jan 20 23:53:45.890940 kernel: audit: type=1334 audit(1768953225.880:673): prog-id=225 op=UNLOAD Jan 20 23:53:45.880000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=fffff49aeca8 a3=0 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.880000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.881000 audit: BPF prog-id=226 op=LOAD Jan 20 23:53:45.881000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff49aeb88 a2=74 a3=95 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) 
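The BPF prog-id LOAD/UNLOAD audit records in this stretch appear to come from bpftool invocations spawned by calico-node (pid 5162, which shows up with comm="calico-node" further down) while it probes kernel BPF features and pins its maps; the proctitle above decodes to bpftool map create /sys/fs/bpf/tc/globals/cali_ctlb_progs type prog_array key 4 value 4 entries 3 name cali_ctlb_progs flags 0. A small sketch, assuming the journal has been exported to a plain-text file (node.journal.txt is a hypothetical name), that checks each probe LOAD is matched by an UNLOAD:

import re
from collections import Counter

BPF_RE = re.compile(r"audit: BPF prog-id=(\d+) op=(LOAD|UNLOAD)")

def bpf_balance(journal_text: str) -> Counter:
    """Net LOAD count per prog-id; a non-zero value means still loaded or unmatched records."""
    ops = Counter()
    for prog_id, op in BPF_RE.findall(journal_text):
        ops[prog_id] += 1 if op == "LOAD" else -1
    return ops

if __name__ == "__main__":
    with open("node.journal.txt") as fh:  # hypothetical export of this journal
        leftover = {pid: n for pid, n in bpf_balance(fh.read()).items() if n}
        print(leftover or "every LOAD has a matching UNLOAD")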
Jan 20 23:53:45.881000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.881000 audit: BPF prog-id=226 op=UNLOAD Jan 20 23:53:45.881000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.881000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.881000 audit: BPF prog-id=227 op=LOAD Jan 20 23:53:45.881000 audit[5180]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=fffff49aebb8 a2=40 a3=fffff49aebe8 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.881000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.885000 audit: BPF prog-id=227 op=UNLOAD Jan 20 23:53:45.885000 audit[5180]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=fffff49aebe8 items=0 ppid=5162 pid=5180 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.885000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F74632F676C6F62616C732F63616C695F63746C625F70726F677300747970650070726F675F6172726179006B657900340076616C7565003400656E74726965730033006E616D650063616C695F63746C625F70726F677300666C6167730030 Jan 20 23:53:45.886000 audit: BPF prog-id=228 op=LOAD Jan 20 23:53:45.886000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9e41208 a2=98 a3=ffffc9e411f8 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.886000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.889000 audit: BPF prog-id=228 op=UNLOAD Jan 20 23:53:45.889000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9e411d8 a3=0 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.889000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.890000 audit: BPF prog-id=229 op=LOAD Jan 20 23:53:45.890000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9e40e98 a2=74 a3=95 items=0 ppid=5162 
pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.890000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.890000 audit: BPF prog-id=229 op=UNLOAD Jan 20 23:53:45.890000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.890000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.890000 audit: BPF prog-id=230 op=LOAD Jan 20 23:53:45.890000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9e40ef8 a2=94 a3=2 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.890000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.890000 audit: BPF prog-id=230 op=UNLOAD Jan 20 23:53:45.890000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.890000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.996000 audit: BPF prog-id=231 op=LOAD Jan 20 23:53:45.996000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffc9e40eb8 a2=40 a3=ffffc9e40ee8 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.996000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:45.996000 audit: BPF prog-id=231 op=UNLOAD Jan 20 23:53:45.996000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=40 a3=ffffc9e40ee8 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:45.996000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.005000 audit: BPF prog-id=232 op=LOAD Jan 20 23:53:46.005000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc9e40ec8 a2=94 a3=4 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.005000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=232 op=UNLOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 
key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=233 op=LOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc9e40d08 a2=94 a3=5 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=233 op=UNLOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=234 op=LOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc9e40f38 a2=94 a3=6 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=234 op=UNLOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=235 op=LOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffc9e40708 a2=94 a3=83 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.006000 audit: BPF prog-id=236 op=LOAD Jan 20 23:53:46.006000 audit[5181]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffc9e404c8 a2=94 a3=2 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.006000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.007000 audit: BPF prog-id=236 op=UNLOAD Jan 20 23:53:46.007000 audit[5181]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.007000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.007000 audit: BPF prog-id=235 op=UNLOAD Jan 20 23:53:46.007000 audit[5181]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=9e17620 a3=9e0ab00 items=0 ppid=5162 pid=5181 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.007000 audit: PROCTITLE proctitle=627066746F6F6C006D6170006C697374002D2D6A736F6E Jan 20 23:53:46.016000 audit: BPF prog-id=237 op=LOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc93857b8 a2=98 a3=ffffc93857a8 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.016000 audit: BPF prog-id=237 op=UNLOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc9385788 a3=0 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.016000 audit: BPF prog-id=238 op=LOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9385668 a2=74 a3=95 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.016000 audit: BPF prog-id=238 op=UNLOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.016000 audit: BPF prog-id=239 op=LOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc9385698 a2=40 a3=ffffc93856c8 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.016000 audit: BPF prog-id=239 op=UNLOAD Jan 20 23:53:46.016000 audit[5184]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=40 a3=ffffc93856c8 items=0 ppid=5162 pid=5184 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.016000 audit: PROCTITLE proctitle=627066746F6F6C006D617000637265617465002F7379732F66732F6270662F63616C69636F2F63616C69636F5F6661696C736166655F706F7274735F763100747970650068617368006B657900340076616C7565003100656E7472696573003635353335006E616D650063616C69636F5F6661696C736166655F706F7274735F Jan 20 23:53:46.072010 systemd-networkd[1586]: vxlan.calico: Link UP Jan 20 23:53:46.072018 systemd-networkd[1586]: vxlan.calico: Gained carrier Jan 20 23:53:46.103000 audit: BPF prog-id=240 op=LOAD Jan 20 23:53:46.103000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc884f5d8 a2=98 a3=ffffc884f5c8 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.103000 audit: BPF prog-id=240 op=UNLOAD Jan 20 23:53:46.103000 audit[5210]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffc884f5a8 a3=0 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.103000 audit: BPF prog-id=241 op=LOAD Jan 20 23:53:46.103000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc884f2b8 a2=74 a3=95 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.103000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.103000 audit: BPF prog-id=241 op=UNLOAD Jan 20 23:53:46.103000 audit[5210]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=74 a3=95 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.103000 audit: 
PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=242 op=LOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffc884f318 a2=94 a3=2 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=242 op=UNLOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=70 a3=2 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=243 op=LOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc884f198 a2=40 a3=ffffc884f1c8 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=243 op=UNLOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=40 a3=ffffc884f1c8 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=244 op=LOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc884f2e8 a2=94 a3=b7 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=244 op=UNLOAD Jan 20 23:53:46.104000 audit[5210]: 
SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=b7 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=245 op=LOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc884e998 a2=94 a3=2 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=245 op=UNLOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=2 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.104000 audit: BPF prog-id=246 op=LOAD Jan 20 23:53:46.104000 audit[5210]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffc884eb28 a2=94 a3=30 items=0 ppid=5162 pid=5210 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.104000 audit: PROCTITLE proctitle=627066746F6F6C0070726F67006C6F6164002F7573722F6C69622F63616C69636F2F6270662F66696C7465722E6F002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41007479706500786470 Jan 20 23:53:46.111000 audit: BPF prog-id=247 op=LOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=3 a0=5 a1=ffffe2851e18 a2=98 a3=ffffe2851e08 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.111000 audit: BPF prog-id=247 op=UNLOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=3 a1=57156c a2=ffffe2851de8 a3=0 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE 
proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.111000 audit: BPF prog-id=248 op=LOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2851aa8 a2=74 a3=95 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.111000 audit: BPF prog-id=248 op=UNLOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=74 a3=95 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.111000 audit: BPF prog-id=249 op=LOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2851b08 a2=94 a3=2 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.111000 audit: BPF prog-id=249 op=UNLOAD Jan 20 23:53:46.111000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 a1=57156c a2=70 a3=2 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.111000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.128324 containerd[1670]: time="2026-01-20T23:53:46.128262000Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:53:46.219000 audit: BPF prog-id=250 op=LOAD Jan 20 23:53:46.219000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=4 a0=5 a1=ffffe2851ac8 a2=40 a3=ffffe2851af8 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.219000 audit: BPF prog-id=250 op=UNLOAD Jan 20 23:53:46.219000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=4 
a1=57156c a2=40 a3=ffffe2851af8 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.219000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.228000 audit: BPF prog-id=251 op=LOAD Jan 20 23:53:46.228000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2851ad8 a2=94 a3=4 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.228000 audit: BPF prog-id=251 op=UNLOAD Jan 20 23:53:46.228000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=4 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.228000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=252 op=LOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=6 a0=5 a1=ffffe2851918 a2=94 a3=5 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=252 op=UNLOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=6 a1=57156c a2=70 a3=5 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=253 op=LOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2851b48 a2=94 a3=6 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: 
BPF prog-id=253 op=UNLOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=70 a3=6 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=254 op=LOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=5 a0=5 a1=ffffe2851318 a2=94 a3=83 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=255 op=LOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=7 a0=5 a1=ffffe28510d8 a2=94 a3=2 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.229000 audit: BPF prog-id=255 op=UNLOAD Jan 20 23:53:46.229000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=7 a1=57156c a2=c a3=0 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.229000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.230000 audit: BPF prog-id=254 op=UNLOAD Jan 20 23:53:46.230000 audit[5213]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=5 a1=57156c a2=39cfe620 a3=39cf1b00 items=0 ppid=5162 pid=5213 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="bpftool" exe="/usr/bin/bpftool" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.230000 audit: PROCTITLE proctitle=627066746F6F6C002D2D6A736F6E002D2D7072657474790070726F670073686F770070696E6E6564002F7379732F66732F6270662F63616C69636F2F7864702F70726566696C7465725F76315F63616C69636F5F746D705F41 Jan 20 23:53:46.238000 audit: BPF prog-id=246 op=UNLOAD Jan 20 23:53:46.238000 audit[5162]: SYSCALL arch=c00000b7 syscall=35 success=yes exit=0 a0=ffffffffffffff9c a1=4000820400 a2=0 a3=0 items=0 ppid=4197 pid=5162 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="calico-node" exe="/usr/bin/calico-node" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.238000 audit: PROCTITLE proctitle=63616C69636F2D6E6F6465002D66656C6978 Jan 20 23:53:46.333000 
audit[5283]: NETFILTER_CFG table=nat:133 family=2 entries=15 op=nft_register_chain pid=5283 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:53:46.333000 audit[5283]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5084 a0=3 a1=ffffde2b0e10 a2=0 a3=ffffa8f2bfa8 items=0 ppid=5162 pid=5283 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.333000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:53:46.335000 audit[5284]: NETFILTER_CFG table=mangle:134 family=2 entries=16 op=nft_register_chain pid=5284 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:53:46.335000 audit[5284]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=6868 a0=3 a1=ffffe02191f0 a2=0 a3=ffff8406ffa8 items=0 ppid=5162 pid=5284 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.335000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:53:46.347000 audit[5281]: NETFILTER_CFG table=raw:135 family=2 entries=21 op=nft_register_chain pid=5281 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:53:46.347000 audit[5281]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=8452 a0=3 a1=ffffe3bd7570 a2=0 a3=ffffaaecafa8 items=0 ppid=5162 pid=5281 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.347000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:53:46.367000 audit[5291]: NETFILTER_CFG table=filter:136 family=2 entries=327 op=nft_register_chain pid=5291 subj=system_u:system_r:kernel_t:s0 comm="iptables-nft-re" Jan 20 23:53:46.367000 audit[5291]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=193472 a0=3 a1=ffffcb25a780 a2=0 a3=ffff89a4efa8 items=0 ppid=5162 pid=5291 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-nft-re" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:53:46.367000 audit: PROCTITLE proctitle=69707461626C65732D6E66742D726573746F7265002D2D6E6F666C757368002D2D766572626F7365002D2D77616974003130002D2D776169742D696E74657276616C003530303030 Jan 20 23:53:46.669721 containerd[1670]: time="2026-01-20T23:53:46.669548467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:46.672435 containerd[1670]: time="2026-01-20T23:53:46.672384155Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:53:46.672516 containerd[1670]: time="2026-01-20T23:53:46.672429915Z" level=info msg="stop pulling image 
ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:46.672845 kubelet[2940]: E0120 23:53:46.672642 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:53:46.672845 kubelet[2940]: E0120 23:53:46.672805 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:53:46.673593 kubelet[2940]: E0120 23:53:46.672977 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:149aa4975ca74b8e859dd8df0df1ec0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:46.675170 containerd[1670]: time="2026-01-20T23:53:46.675143523Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:53:47.015994 containerd[1670]: time="2026-01-20T23:53:47.015864377Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:47.017775 containerd[1670]: time="2026-01-20T23:53:47.017705182Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:53:47.018204 containerd[1670]: time="2026-01-20T23:53:47.017806183Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" 
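Every pull attempt in this journal fails the same way: containerd reports "fetch failed after status: 404 Not Found" from ghcr.io and the kubelet puts the container into ImagePullBackOff. A quick cross-check from any machine that the tag itself is missing in the registry (rather than a node-side credential or proxy problem) is to query the manifest endpoint directly. A minimal sketch, assuming ghcr.io follows the standard anonymous Docker Registry v2 token flow for public images; the repository and tag are copied from the log:

import json
import urllib.error
import urllib.request

REPO, TAG = "flatcar/calico/whisker-backend", "v3.30.4"

def manifest_exists(repo: str, tag: str) -> bool:
    # anonymous pull token for a public repository (assumed ghcr.io token endpoint)
    token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
    token = json.load(urllib.request.urlopen(token_url))["token"]
    req = urllib.request.Request(
        f"https://ghcr.io/v2/{repo}/manifests/{tag}",
        headers={
            "Authorization": f"Bearer {token}",
            "Accept": "application/vnd.oci.image.index.v1+json, "
                      "application/vnd.docker.distribution.manifest.list.v2+json, "
                      "application/vnd.docker.distribution.manifest.v2+json",
        },
        method="HEAD",
    )
    try:
        with urllib.request.urlopen(req) as resp:
            return 200 <= resp.status < 300
    except urllib.error.HTTPError as err:
        if err.code == 404:  # the same "not found" containerd surfaces above
            return False
        raise

if __name__ == "__main__":
    print(f"{REPO}:{TAG} exists:", manifest_exists(REPO, TAG))

On the node itself, crictl pull ghcr.io/flatcar/calico/whisker-backend:v3.30.4 exercises the same containerd pull path that produced the errors above.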
Jan 20 23:53:47.018243 kubelet[2940]: E0120 23:53:47.017939 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:53:47.018243 kubelet[2940]: E0120 23:53:47.017987 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:53:47.018243 kubelet[2940]: E0120 23:53:47.018107 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:47.019308 kubelet[2940]: E0120 23:53:47.019272 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:53:47.734769 systemd-networkd[1586]: vxlan.calico: Gained IPv6LL Jan 20 23:53:54.127243 containerd[1670]: time="2026-01-20T23:53:54.126879826Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:53:54.452665 containerd[1670]: time="2026-01-20T23:53:54.452540877Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:54.454141 containerd[1670]: time="2026-01-20T23:53:54.454060801Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:53:54.454192 containerd[1670]: time="2026-01-20T23:53:54.454160601Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:54.454478 kubelet[2940]: E0120 23:53:54.454362 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:54.454478 kubelet[2940]: E0120 23:53:54.454410 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:54.454946 kubelet[2940]: E0120 23:53:54.454889 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n26q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:54.456198 kubelet[2940]: E0120 23:53:54.456150 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:53:55.126571 containerd[1670]: time="2026-01-20T23:53:55.126530684Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:53:55.476006 containerd[1670]: time="2026-01-20T23:53:55.475933002Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:55.478315 containerd[1670]: time="2026-01-20T23:53:55.478260009Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:53:55.478390 containerd[1670]: time="2026-01-20T23:53:55.478334529Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:55.478549 kubelet[2940]: E0120 23:53:55.478502 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:55.478762 kubelet[2940]: E0120 23:53:55.478550 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:53:55.478762 kubelet[2940]: E0120 23:53:55.478691 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:55.480078 kubelet[2940]: E0120 23:53:55.480035 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:53:56.127191 containerd[1670]: time="2026-01-20T23:53:56.127079384Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:53:56.475850 containerd[1670]: time="2026-01-20T23:53:56.475735621Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:56.477367 containerd[1670]: time="2026-01-20T23:53:56.477317265Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:53:56.477680 containerd[1670]: time="2026-01-20T23:53:56.477350305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:56.477819 kubelet[2940]: E0120 23:53:56.477763 2940 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:53:56.477866 kubelet[2940]: E0120 23:53:56.477833 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:53:56.478063 kubelet[2940]: E0120 23:53:56.477981 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:56.480243 containerd[1670]: time="2026-01-20T23:53:56.480215073Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:53:56.812026 containerd[1670]: time="2026-01-20T23:53:56.811800301Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:56.815020 containerd[1670]: time="2026-01-20T23:53:56.814938550Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:53:56.815115 containerd[1670]: time="2026-01-20T23:53:56.815061431Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:56.816148 kubelet[2940]: E0120 23:53:56.816088 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:53:56.816423 kubelet[2940]: E0120 23:53:56.816153 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:53:56.816423 kubelet[2940]: E0120 23:53:56.816269 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) --kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:56.817548 kubelet[2940]: E0120 23:53:56.817504 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with 
ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:53:57.126425 containerd[1670]: time="2026-01-20T23:53:57.126285680Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:53:57.470049 containerd[1670]: time="2026-01-20T23:53:57.469966383Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:57.471262 containerd[1670]: time="2026-01-20T23:53:57.471225107Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:53:57.471331 containerd[1670]: time="2026-01-20T23:53:57.471264987Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:57.471511 kubelet[2940]: E0120 23:53:57.471433 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:53:57.471582 kubelet[2940]: E0120 23:53:57.471524 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:53:57.471841 kubelet[2940]: E0120 23:53:57.471687 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dzwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:57.472930 kubelet[2940]: E0120 23:53:57.472897 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:53:58.126314 containerd[1670]: time="2026-01-20T23:53:58.126253499Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:53:58.447204 containerd[1670]: 
time="2026-01-20T23:53:58.446951896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:53:58.448732 containerd[1670]: time="2026-01-20T23:53:58.448700861Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:53:58.448885 containerd[1670]: time="2026-01-20T23:53:58.448743181Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:53:58.449049 kubelet[2940]: E0120 23:53:58.448968 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:53:58.449315 kubelet[2940]: E0120 23:53:58.449043 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:53:58.450366 kubelet[2940]: E0120 23:53:58.450289 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:53:58.451676 kubelet[2940]: E0120 23:53:58.451628 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:54:01.127699 kubelet[2940]: E0120 23:54:01.127651 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:54:06.128702 kubelet[2940]: E0120 23:54:06.128655 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:54:08.126699 kubelet[2940]: E0120 23:54:08.126613 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": 
failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:54:09.127109 kubelet[2940]: E0120 23:54:09.127064 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:54:11.126354 kubelet[2940]: E0120 23:54:11.126300 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:54:12.128106 containerd[1670]: time="2026-01-20T23:54:12.127648326Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:54:12.464416 containerd[1670]: time="2026-01-20T23:54:12.464237688Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:12.466104 containerd[1670]: time="2026-01-20T23:54:12.465971813Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:54:12.466104 containerd[1670]: time="2026-01-20T23:54:12.466061213Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:12.466246 kubelet[2940]: E0120 23:54:12.466199 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:54:12.466877 kubelet[2940]: E0120 23:54:12.466250 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:54:12.466877 kubelet[2940]: E0120 23:54:12.466366 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:149aa4975ca74b8e859dd8df0df1ec0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:12.468972 containerd[1670]: time="2026-01-20T23:54:12.468719581Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:54:12.802753 containerd[1670]: time="2026-01-20T23:54:12.802523295Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:12.804692 containerd[1670]: time="2026-01-20T23:54:12.804612421Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:54:12.804779 containerd[1670]: time="2026-01-20T23:54:12.804682261Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:12.804925 kubelet[2940]: E0120 23:54:12.804858 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:54:12.804925 kubelet[2940]: E0120 23:54:12.804916 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:54:12.805096 kubelet[2940]: E0120 23:54:12.805023 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:12.806473 kubelet[2940]: E0120 23:54:12.806301 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:54:13.127574 kubelet[2940]: E0120 23:54:13.126925 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:54:18.127275 containerd[1670]: time="2026-01-20T23:54:18.127231517Z" level=info 
msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:54:18.455655 containerd[1670]: time="2026-01-20T23:54:18.455605896Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:18.459382 containerd[1670]: time="2026-01-20T23:54:18.459289986Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:54:18.459572 containerd[1670]: time="2026-01-20T23:54:18.459334706Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:18.459845 kubelet[2940]: E0120 23:54:18.459570 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:54:18.459845 kubelet[2940]: E0120 23:54:18.459654 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:54:18.460710 kubelet[2940]: E0120 23:54:18.459805 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n26q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 
},Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:18.461554 kubelet[2940]: E0120 23:54:18.461369 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:54:20.131400 containerd[1670]: time="2026-01-20T23:54:20.131340646Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:54:20.470671 containerd[1670]: time="2026-01-20T23:54:20.470622456Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:20.473142 containerd[1670]: time="2026-01-20T23:54:20.473092783Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:54:20.473222 containerd[1670]: time="2026-01-20T23:54:20.473132943Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:20.473399 kubelet[2940]: E0120 23:54:20.473359 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:54:20.473707 kubelet[2940]: E0120 23:54:20.473414 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:54:20.473707 kubelet[2940]: E0120 23:54:20.473564 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:20.475054 kubelet[2940]: E0120 23:54:20.475014 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:54:23.127558 containerd[1670]: time="2026-01-20T23:54:23.127504852Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:54:23.477193 containerd[1670]: time="2026-01-20T23:54:23.477084891Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:23.479501 containerd[1670]: time="2026-01-20T23:54:23.479444338Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:54:23.479569 containerd[1670]: time="2026-01-20T23:54:23.479477498Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:23.479820 kubelet[2940]: E0120 23:54:23.479751 2940 
log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:54:23.480909 kubelet[2940]: E0120 23:54:23.480279 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:54:23.480909 kubelet[2940]: E0120 23:54:23.480480 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:23.481085 containerd[1670]: time="2026-01-20T23:54:23.480691661Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:54:23.815893 containerd[1670]: time="2026-01-20T23:54:23.815735579Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:23.817654 containerd[1670]: time="2026-01-20T23:54:23.817607344Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 
23:54:23.817763 containerd[1670]: time="2026-01-20T23:54:23.817676225Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:23.817884 kubelet[2940]: E0120 23:54:23.817831 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:54:23.817933 kubelet[2940]: E0120 23:54:23.817885 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:54:23.818201 kubelet[2940]: E0120 23:54:23.818110 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dzwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:23.818537 containerd[1670]: time="2026-01-20T23:54:23.818313546Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:54:23.819417 kubelet[2940]: E0120 23:54:23.819381 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:54:24.189323 containerd[1670]: time="2026-01-20T23:54:24.189283967Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:24.190596 containerd[1670]: time="2026-01-20T23:54:24.190492130Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:54:24.190596 containerd[1670]: time="2026-01-20T23:54:24.190543491Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:24.190735 kubelet[2940]: E0120 23:54:24.190696 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:54:24.190819 kubelet[2940]: E0120 23:54:24.190746 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:54:24.190946 kubelet[2940]: E0120 23:54:24.190867 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:24.192104 kubelet[2940]: E0120 23:54:24.192063 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:54:27.126890 kubelet[2940]: E0120 23:54:27.126831 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:54:28.127207 containerd[1670]: time="2026-01-20T23:54:28.127112904Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:54:28.463342 containerd[1670]: time="2026-01-20T23:54:28.463237105Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:28.465025 containerd[1670]: time="2026-01-20T23:54:28.464957390Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:54:28.465105 containerd[1670]: time="2026-01-20T23:54:28.465037470Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:28.465291 kubelet[2940]: E0120 23:54:28.465256 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:54:28.466151 kubelet[2940]: E0120 23:54:28.465665 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:54:28.466341 kubelet[2940]: E0120 23:54:28.466275 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health 
-live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:28.467654 kubelet[2940]: E0120 23:54:28.467626 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:54:33.126414 kubelet[2940]: E0120 23:54:33.126345 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:54:34.127403 kubelet[2940]: E0120 23:54:34.126998 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:54:34.128003 kubelet[2940]: E0120 23:54:34.127958 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not 
found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:54:39.127095 kubelet[2940]: E0120 23:54:39.127030 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:54:39.127911 kubelet[2940]: E0120 23:54:39.127825 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:54:41.125950 kubelet[2940]: E0120 23:54:41.125898 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:54:47.125491 kubelet[2940]: E0120 23:54:47.125424 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:54:48.126696 kubelet[2940]: E0120 23:54:48.126645 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:54:49.125551 kubelet[2940]: E0120 23:54:49.125502 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:54:52.127738 kubelet[2940]: E0120 23:54:52.127676 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:54:54.126259 containerd[1670]: time="2026-01-20T23:54:54.126216311Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:54:54.127406 kubelet[2940]: E0120 23:54:54.126919 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:54:54.462798 containerd[1670]: time="2026-01-20T23:54:54.462750153Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:54.464343 containerd[1670]: time="2026-01-20T23:54:54.464297557Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:54:54.464546 containerd[1670]: time="2026-01-20T23:54:54.464333117Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:54.464669 kubelet[2940]: E0120 23:54:54.464576 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 
23:54:54.464669 kubelet[2940]: E0120 23:54:54.464636 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:54:54.464789 kubelet[2940]: E0120 23:54:54.464749 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:149aa4975ca74b8e859dd8df0df1ec0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:54.467148 containerd[1670]: time="2026-01-20T23:54:54.467117965Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:54:54.807442 containerd[1670]: time="2026-01-20T23:54:54.807217458Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:54:54.808871 containerd[1670]: time="2026-01-20T23:54:54.808785342Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:54:54.808959 containerd[1670]: time="2026-01-20T23:54:54.808892822Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:54:54.809698 kubelet[2940]: E0120 23:54:54.809126 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:54:54.809698 kubelet[2940]: E0120 
23:54:54.809180 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:54:54.809698 kubelet[2940]: E0120 23:54:54.809295 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:54:54.810497 kubelet[2940]: E0120 23:54:54.810448 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:55:01.126879 containerd[1670]: time="2026-01-20T23:55:01.126840044Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:55:01.468502 containerd[1670]: 
time="2026-01-20T23:55:01.468437860Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:01.469833 containerd[1670]: time="2026-01-20T23:55:01.469768864Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:55:01.469925 containerd[1670]: time="2026-01-20T23:55:01.469846504Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:01.470036 kubelet[2940]: E0120 23:55:01.469993 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:55:01.470337 kubelet[2940]: E0120 23:55:01.470038 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:55:01.470337 kubelet[2940]: E0120 23:55:01.470159 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key --tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod 
calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:01.471374 kubelet[2940]: E0120 23:55:01.471311 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:55:02.126493 kubelet[2940]: E0120 23:55:02.126396 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:55:04.126441 containerd[1670]: time="2026-01-20T23:55:04.126313699Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:55:04.453870 containerd[1670]: time="2026-01-20T23:55:04.453565074Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:04.455163 containerd[1670]: time="2026-01-20T23:55:04.455014438Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:55:04.455163 containerd[1670]: time="2026-01-20T23:55:04.455103839Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:04.455369 kubelet[2940]: E0120 23:55:04.455295 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:55:04.455369 kubelet[2940]: E0120 23:55:04.455352 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:55:04.455732 kubelet[2940]: E0120 23:55:04.455506 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-n26q6,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-nfdlb_calico-apiserver(0c4c02eb-8f55-425f-9205-33b16803b19e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:04.457544 kubelet[2940]: E0120 23:55:04.456982 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:55:07.126085 kubelet[2940]: E0120 23:55:07.125969 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:55:07.126771 containerd[1670]: time="2026-01-20T23:55:07.126732876Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:55:07.484152 containerd[1670]: time="2026-01-20T23:55:07.484067378Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:07.485535 
containerd[1670]: time="2026-01-20T23:55:07.485438462Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" Jan 20 23:55:07.486790 containerd[1670]: time="2026-01-20T23:55:07.486688305Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/csi:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:07.486936 kubelet[2940]: E0120 23:55:07.486890 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:55:07.486936 kubelet[2940]: E0120 23:55:07.486931 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" image="ghcr.io/flatcar/calico/csi:v3.30.4" Jan 20 23:55:07.487145 kubelet[2940]: E0120 23:55:07.487064 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-csi,Image:ghcr.io/flatcar/calico/csi:v3.30.4,Command:[],Args:[--nodeid=$(KUBE_NODE_NAME) --loglevel=$(LOG_LEVEL)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:warn,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kubelet-dir,ReadOnly:false,MountPath:/var/lib/kubelet,SubPath:,MountPropagation:*Bidirectional,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:varrun,ReadOnly:false,MountPath:/var/run,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/csi:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:07.488916 containerd[1670]: time="2026-01-20T23:55:07.488836191Z" level=info msg="PullImage 
\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\"" Jan 20 23:55:07.821124 containerd[1670]: time="2026-01-20T23:55:07.820973421Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:07.823467 containerd[1670]: time="2026-01-20T23:55:07.823376828Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" Jan 20 23:55:07.824089 containerd[1670]: time="2026-01-20T23:55:07.823503668Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:07.824160 kubelet[2940]: E0120 23:55:07.823672 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:55:07.824160 kubelet[2940]: E0120 23:55:07.823724 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" image="ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4" Jan 20 23:55:07.824160 kubelet[2940]: E0120 23:55:07.823896 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:csi-node-driver-registrar,Image:ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4,Command:[],Args:[--v=5 --csi-address=$(ADDRESS) 
--kubelet-registration-path=$(DRIVER_REG_SOCK_PATH)],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:ADDRESS,Value:/csi/csi.sock,ValueFrom:nil,},EnvVar{Name:DRIVER_REG_SOCK_PATH,Value:/var/lib/kubelet/plugins/csi.tigera.io/csi.sock,ValueFrom:nil,},EnvVar{Name:KUBE_NODE_NAME,Value:,ValueFrom:&EnvVarSource{FieldRef:&ObjectFieldSelector{APIVersion:v1,FieldPath:spec.nodeName,},ResourceFieldRef:nil,ConfigMapKeyRef:nil,SecretKeyRef:nil,},},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:registration-dir,ReadOnly:false,MountPath:/registration,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:socket-dir,ReadOnly:false,MountPath:/csi,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-td2cf,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*true,SELinuxOptions:nil,RunAsUser:*0,RunAsNonRoot:*false,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*true,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod csi-node-driver-f8rjq_calico-system(d9863653-2d98-4479-88c2-8614b7871a32): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:07.825598 kubelet[2940]: E0120 23:55:07.825556 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:55:10.130938 kubelet[2940]: E0120 23:55:10.130802 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:55:12.128637 kubelet[2940]: E0120 23:55:12.128574 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:55:12.862316 systemd[1]: Started sshd@9-10.0.2.209:22-20.161.92.111:46244.service - OpenSSH per-connection server daemon (20.161.92.111:46244). Jan 20 23:55:12.862000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.209:22-20.161.92.111:46244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:12.867052 kernel: kauditd_printk_skb: 194 callbacks suppressed Jan 20 23:55:12.867104 kernel: audit: type=1130 audit(1768953312.862:738): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.209:22-20.161.92.111:46244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:13.415000 audit[5443]: USER_ACCT pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.416680 sshd[5443]: Accepted publickey for core from 20.161.92.111 port 46244 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:13.417850 sshd-session[5443]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:13.416000 audit[5443]: CRED_ACQ pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.423271 kernel: audit: type=1101 audit(1768953313.415:739): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.423346 kernel: audit: type=1103 audit(1768953313.416:740): pid=5443 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.424827 systemd-logind[1651]: New session 11 of user core. 
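Aside from the SSH session that opens here, every pull attempt in this window ends the same way: containerd receives a 404 from ghcr.io while resolving ghcr.io/flatcar/calico/<component>:v3.30.4, so the kubelet records ErrImagePull and, on later sync loops, ImagePullBackOff. The sketch below is illustrative only and not taken from this host; it assumes ghcr.io's anonymous token endpoint and the standard OCI distribution manifest route for public images, and simply checks from the outside whether such a tag resolves at all, which separates a genuinely missing tag from an auth or network problem.

    # Minimal sketch (not from this host): ask ghcr.io directly whether the tag
    # that containerd keeps failing to resolve actually exists. Assumes the
    # registry's anonymous /token endpoint and the standard OCI distribution
    # manifest route for public images.
    import json
    import urllib.error
    import urllib.request

    def tag_exists(repo: str, tag: str) -> bool:
        # Anonymous bearer token with pull scope (assumption: public repository).
        token_url = f"https://ghcr.io/token?scope=repository:{repo}:pull"
        with urllib.request.urlopen(token_url) as resp:
            token = json.load(resp)["token"]
        # HEAD the manifest: 200 means the tag resolves; 404 matches the
        # "failed to resolve image ... not found" errors in the log above.
        req = urllib.request.Request(
            f"https://ghcr.io/v2/{repo}/manifests/{tag}",
            method="HEAD",
            headers={
                "Authorization": f"Bearer {token}",
                "Accept": "application/vnd.oci.image.index.v1+json, "
                          "application/vnd.docker.distribution.manifest.list.v2+json",
            },
        )
        try:
            urllib.request.urlopen(req)
            return True
        except urllib.error.HTTPError as err:
            if err.code == 404:
                return False
            raise

    if __name__ == "__main__":
        # Repository and tag copied from the failing PullImage entries above.
        print(tag_exists("flatcar/calico/csi", "v3.30.4"))

A 404 from this check would line up with containerd's "fetch failed after status: 404 Not Found" lines above, i.e. the tag (or the whole repository path) is simply not published under that name.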
Jan 20 23:55:13.425377 kernel: audit: type=1006 audit(1768953313.416:741): pid=5443 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=11 res=1 Jan 20 23:55:13.425422 kernel: audit: type=1300 audit(1768953313.416:741): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc52e7f30 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:13.416000 audit[5443]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffc52e7f30 a2=3 a3=0 items=0 ppid=1 pid=5443 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=11 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:13.428969 kernel: audit: type=1327 audit(1768953313.416:741): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:13.416000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:13.439708 systemd[1]: Started session-11.scope - Session 11 of User core. Jan 20 23:55:13.442000 audit[5443]: USER_START pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.444000 audit[5447]: CRED_ACQ pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.450477 kernel: audit: type=1105 audit(1768953313.442:742): pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.450744 kernel: audit: type=1103 audit(1768953313.444:743): pid=5447 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.796933 sshd[5447]: Connection closed by 20.161.92.111 port 46244 Jan 20 23:55:13.797700 sshd-session[5443]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:13.799000 audit[5443]: USER_END pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.802740 systemd[1]: sshd@9-10.0.2.209:22-20.161.92.111:46244.service: Deactivated successfully. 
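The pull attempts resume below (kube-controllers at 23:55:14, apiserver shortly after) with the identical outcome. When a capture like this grows long, a quick tally of containerd's failed PullImage entries per image shows whether a single tag is broken or, as here, the whole v3.30.4 set fails to resolve. The sketch below is only an aside; it assumes the journal has been exported to a text file, and the filename is made up.

    # Sketch: count containerd 'PullImage "..." failed' entries per image in a
    # saved copy of this journal. The input path is hypothetical.
    import re
    from collections import Counter

    # Matches the image reference inside msg="PullImage \"<image>\" failed",
    # tolerating the backslash-escaped quotes that the journal text contains.
    PATTERN = re.compile(r'PullImage \\?"(?P<image>[^"\\]+)\\?" failed')

    def failed_pulls(path: str) -> Counter:
        counts: Counter = Counter()
        with open(path, encoding="utf-8", errors="replace") as fh:
            for line in fh:
                for match in PATTERN.finditer(line):
                    counts[match.group("image")] += 1
        return counts

    if __name__ == "__main__":
        for image, hits in failed_pulls("node-journal.txt").most_common():
            print(f"{hits:4d}  {image}")

Grouped that way, the output makes it obvious that every Calico component image pulled in this window (apiserver, csi, node-driver-registrar, kube-controllers, goldmane, whisker, whisker-backend) is affected, not just one.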
Jan 20 23:55:13.799000 audit[5443]: CRED_DISP pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.805500 systemd[1]: session-11.scope: Deactivated successfully. Jan 20 23:55:13.806885 kernel: audit: type=1106 audit(1768953313.799:744): pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.806956 kernel: audit: type=1104 audit(1768953313.799:745): pid=5443 uid=0 auid=500 ses=11 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:13.803000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@9-10.0.2.209:22-20.161.92.111:46244 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:13.807713 systemd-logind[1651]: Session 11 logged out. Waiting for processes to exit. Jan 20 23:55:13.808448 systemd-logind[1651]: Removed session 11. Jan 20 23:55:14.129162 containerd[1670]: time="2026-01-20T23:55:14.129010054Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\"" Jan 20 23:55:14.483221 containerd[1670]: time="2026-01-20T23:55:14.483152467Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:14.485044 containerd[1670]: time="2026-01-20T23:55:14.484965432Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" Jan 20 23:55:14.485424 containerd[1670]: time="2026-01-20T23:55:14.485051352Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/kube-controllers:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:14.485494 kubelet[2940]: E0120 23:55:14.485224 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:55:14.485494 kubelet[2940]: E0120 23:55:14.485273 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" image="ghcr.io/flatcar/calico/kube-controllers:v3.30.4" Jan 20 23:55:14.485494 kubelet[2940]: E0120 23:55:14.485423 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container 
&Container{Name:calico-kube-controllers,Image:ghcr.io/flatcar/calico/kube-controllers:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:KUBE_CONTROLLERS_CONFIG_NAME,Value:default,ValueFrom:nil,},EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:ENABLED_CONTROLLERS,Value:node,loadbalancer,ValueFrom:nil,},EnvVar{Name:DISABLE_KUBE_CONTROLLERS_CONFIG_API,Value:false,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:CA_CRT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:tigera-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/cert.pem,SubPath:ca-bundle.crt,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-7dzwt,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -l],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:10,TimeoutSeconds:10,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:6,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/usr/bin/check-status -r],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:10,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*999,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*0,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-kube-controllers-596dccccb4-mz7rs_calico-system(a2e09f63-e3b2-438b-a6ce-d2eefff60f3e): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:14.486634 kubelet[2940]: E0120 23:55:14.486596 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:55:16.125909 kubelet[2940]: E0120 23:55:16.125860 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image 
\\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:55:18.902000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.209:22-20.161.92.111:46260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:18.902787 systemd[1]: Started sshd@10-10.0.2.209:22-20.161.92.111:46260.service - OpenSSH per-connection server daemon (20.161.92.111:46260). Jan 20 23:55:18.903504 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:55:18.903559 kernel: audit: type=1130 audit(1768953318.902:747): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.209:22-20.161.92.111:46260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:19.428000 audit[5473]: USER_ACCT pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.432808 sshd[5473]: Accepted publickey for core from 20.161.92.111 port 46260 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:19.432000 audit[5473]: CRED_ACQ pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.434254 sshd-session[5473]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:19.436432 kernel: audit: type=1101 audit(1768953319.428:748): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.436533 kernel: audit: type=1103 audit(1768953319.432:749): pid=5473 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.438480 kernel: audit: type=1006 audit(1768953319.432:750): pid=5473 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=12 res=1 Jan 20 23:55:19.432000 audit[5473]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3a32860 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:19.442364 kernel: audit: type=1300 audit(1768953319.432:750): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff3a32860 a2=3 a3=0 items=0 ppid=1 pid=5473 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=12 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:19.432000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:19.444287 kernel: audit: type=1327 audit(1768953319.432:750): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:19.447516 systemd-logind[1651]: New session 12 of user core. Jan 20 23:55:19.457776 systemd[1]: Started session-12.scope - Session 12 of User core. Jan 20 23:55:19.460000 audit[5473]: USER_START pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.465000 audit[5477]: CRED_ACQ pid=5477 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.468847 kernel: audit: type=1105 audit(1768953319.460:751): pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.468963 kernel: audit: type=1103 audit(1768953319.465:752): pid=5477 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.779671 sshd[5477]: Connection closed by 20.161.92.111 port 46260 Jan 20 23:55:19.780174 sshd-session[5473]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:19.781000 audit[5473]: USER_END pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.785304 systemd[1]: sshd@10-10.0.2.209:22-20.161.92.111:46260.service: Deactivated successfully. Jan 20 23:55:19.781000 audit[5473]: CRED_DISP pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.787681 systemd[1]: session-12.scope: Deactivated successfully. 
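The `404 Not Found` from ghcr.io and containerd's `failed to resolve image ... not found` above can be reproduced outside the kubelet by asking the registry for the manifest directly. The sketch below is illustrative only and is not part of the node's tooling; it assumes the standard OCI Distribution anonymous-token flow (realm and service taken from the `WWW-Authenticate` challenge) and spec-standard `Accept` types, none of which appear in the log itself.

```python
# Illustrative only: ask the registry whether a tag resolves, mirroring the 404
# containerd reported for ghcr.io/flatcar/calico/kube-controllers:v3.30.4.
# Assumes the standard OCI Distribution anonymous bearer-token flow.
import json
import re
import urllib.error
import urllib.request

ACCEPT = ("application/vnd.oci.image.index.v1+json, "
          "application/vnd.docker.distribution.manifest.list.v2+json, "
          "application/vnd.oci.image.manifest.v1+json, "
          "application/vnd.docker.distribution.manifest.v2+json")

def manifest_status(image: str, tag: str) -> int:
    registry, _, repo = image.partition("/")
    url = f"https://{registry}/v2/{repo}/manifests/{tag}"
    headers = {"Accept": ACCEPT}
    for _ in range(2):                      # second pass retries with a token
        try:
            return urllib.request.urlopen(
                urllib.request.Request(url, method="HEAD", headers=headers)).status
        except urllib.error.HTTPError as err:
            if err.code != 401 or "Authorization" in headers:
                return err.code             # 404 = "failed to resolve image: ... not found"
            # Anonymous pull: follow the WWW-Authenticate challenge for a token.
            challenge = dict(re.findall(r'(\w+)="([^"]*)"',
                                        err.headers.get("WWW-Authenticate", "")))
            token_url = (f"{challenge['realm']}?service={challenge['service']}"
                         f"&scope=repository:{repo}:pull")
            token = json.load(urllib.request.urlopen(token_url))["token"]
            headers["Authorization"] = f"Bearer {token}"
    return 401

print(manifest_status("ghcr.io/flatcar/calico/kube-controllers", "v3.30.4"))
```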
Jan 20 23:55:19.788833 kernel: audit: type=1106 audit(1768953319.781:753): pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.788893 kernel: audit: type=1104 audit(1768953319.781:754): pid=5473 uid=0 auid=500 ses=12 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:19.785000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@10-10.0.2.209:22-20.161.92.111:46260 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:19.789662 systemd-logind[1651]: Session 12 logged out. Waiting for processes to exit. Jan 20 23:55:19.790658 systemd-logind[1651]: Removed session 12. Jan 20 23:55:20.127609 kubelet[2940]: E0120 23:55:20.127191 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:55:21.126661 containerd[1670]: time="2026-01-20T23:55:21.126396778Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\"" Jan 20 23:55:21.465536 containerd[1670]: time="2026-01-20T23:55:21.465489307Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:55:21.466768 containerd[1670]: time="2026-01-20T23:55:21.466727591Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/goldmane:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" Jan 20 23:55:21.466910 containerd[1670]: time="2026-01-20T23:55:21.466767471Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/goldmane:v3.30.4: active requests=0, bytes read=0" Jan 20 23:55:21.467059 kubelet[2940]: E0120 23:55:21.466977 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:55:21.467541 kubelet[2940]: E0120 23:55:21.467083 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" image="ghcr.io/flatcar/calico/goldmane:v3.30.4" Jan 20 23:55:21.467541 kubelet[2940]: E0120 23:55:21.467249 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:goldmane,Image:ghcr.io/flatcar/calico/goldmane:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:7443,ValueFrom:nil,},EnvVar{Name:SERVER_CERT_PATH,Value:/goldmane-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:SERVER_KEY_PATH,Value:/goldmane-key-pair/tls.key,ValueFrom:nil,},EnvVar{Name:CA_CERT_PATH,Value:/etc/pki/tls/certs/tigera-ca-bundle.crt,ValueFrom:nil,},EnvVar{Name:PUSH_URL,Value:https://guardian.calico-system.svc.cluster.local:443/api/v1/flows/bulk,ValueFrom:nil,},EnvVar{Name:FILE_CONFIG_PATH,Value:/config/config.json,ValueFrom:nil,},EnvVar{Name:HEALTH_ENABLED,Value:true,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:config,ReadOnly:true,MountPath:/config,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:goldmane-key-pair,ReadOnly:true,MountPath:/goldmane-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-m7rtl,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -live],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:&ExecAction{Command:[/health -ready],},HTTPGet:nil,TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:30,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod goldmane-666569f655-27t6c_calico-system(7524b8f6-4e20-4bc6-8860-ffd104203deb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/goldmane:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found" logger="UnhandledError" Jan 20 23:55:21.468532 kubelet[2940]: E0120 23:55:21.468492 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" 
pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:55:23.126879 kubelet[2940]: E0120 23:55:23.126758 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:55:24.126849 kubelet[2940]: E0120 23:55:24.126796 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:55:24.888390 systemd[1]: Started sshd@11-10.0.2.209:22-20.161.92.111:41406.service - OpenSSH per-connection server daemon (20.161.92.111:41406). Jan 20 23:55:24.888000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.209:22-20.161.92.111:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:24.892630 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:55:24.892710 kernel: audit: type=1130 audit(1768953324.888:756): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.209:22-20.161.92.111:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:25.414000 audit[5505]: USER_ACCT pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.414916 sshd[5505]: Accepted publickey for core from 20.161.92.111 port 41406 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:25.416000 audit[5505]: CRED_ACQ pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.419844 sshd-session[5505]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:25.422482 kernel: audit: type=1101 audit(1768953325.414:757): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.422572 kernel: audit: type=1103 audit(1768953325.416:758): pid=5505 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.422593 kernel: audit: type=1006 audit(1768953325.416:759): pid=5505 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=13 res=1 Jan 20 23:55:25.416000 audit[5505]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee204d80 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:25.427294 kernel: audit: type=1300 audit(1768953325.416:759): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffee204d80 a2=3 a3=0 items=0 ppid=1 pid=5505 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=13 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:25.416000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:25.429497 kernel: audit: type=1327 audit(1768953325.416:759): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:25.430427 systemd-logind[1651]: New session 13 of user core. Jan 20 23:55:25.441902 systemd[1]: Started session-13.scope - Session 13 of User core. 
Jan 20 23:55:25.444000 audit[5505]: USER_START pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.446000 audit[5509]: CRED_ACQ pid=5509 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.452286 kernel: audit: type=1105 audit(1768953325.444:760): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.452365 kernel: audit: type=1103 audit(1768953325.446:761): pid=5509 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.769324 sshd[5509]: Connection closed by 20.161.92.111 port 41406 Jan 20 23:55:25.769752 sshd-session[5505]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:25.772000 audit[5505]: USER_END pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.775400 systemd[1]: sshd@11-10.0.2.209:22-20.161.92.111:41406.service: Deactivated successfully. Jan 20 23:55:25.772000 audit[5505]: CRED_DISP pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.779016 systemd[1]: session-13.scope: Deactivated successfully. Jan 20 23:55:25.780085 kernel: audit: type=1106 audit(1768953325.772:762): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.780169 kernel: audit: type=1104 audit(1768953325.772:763): pid=5505 uid=0 auid=500 ses=13 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:25.775000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@11-10.0.2.209:22-20.161.92.111:41406 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:25.780560 systemd-logind[1651]: Session 13 logged out. Waiting for processes to exit. Jan 20 23:55:25.784324 systemd-logind[1651]: Removed session 13. 
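Each SSH session above is bracketed by a kernel-relayed `type=1105` (PAM session_open) and `type=1106` (session_close) record sharing the same pid, so the records can be paired to measure how long a session lasted. The parser below is illustrative; the two sample records are copied from the journal above (session 13, sshd-session pid 5505).

```python
# Illustrative: pair type=1105 (session_open) / type=1106 (session_close) audit
# records by pid to see how long each SSH session in this journal lasted.
import re
from collections import defaultdict

record = re.compile(r"type=(110[56]) audit\((\d+\.\d+):\d+\): pid=(\d+)")
sessions = defaultdict(dict)

def feed(line: str) -> None:
    m = record.search(line)
    if m:
        rtype, ts, pid = m.groups()
        sessions[pid]["open" if rtype == "1105" else "close"] = float(ts)

# Sample records copied from the journal above.
feed("kernel: audit: type=1105 audit(1768953325.444:760): pid=5505 uid=0 auid=500 ses=13")
feed("kernel: audit: type=1106 audit(1768953325.772:762): pid=5505 uid=0 auid=500 ses=13")

for pid, ts in sessions.items():
    if ts.keys() >= {"open", "close"}:
        print(f"pid {pid}: {ts['close'] - ts['open']:.3f}s between open and close")
# -> pid 5505: 0.328s between open and close
```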
Jan 20 23:55:25.876000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.2.209:22-20.161.92.111:41410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:25.876436 systemd[1]: Started sshd@12-10.0.2.209:22-20.161.92.111:41410.service - OpenSSH per-connection server daemon (20.161.92.111:41410). Jan 20 23:55:26.126573 kubelet[2940]: E0120 23:55:26.125440 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:55:26.397000 audit[5523]: USER_ACCT pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.398807 sshd[5523]: Accepted publickey for core from 20.161.92.111 port 41410 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:26.400000 audit[5523]: CRED_ACQ pid=5523 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.400000 audit[5523]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffa8f0b00 a2=3 a3=0 items=0 ppid=1 pid=5523 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=14 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:26.400000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:26.401512 sshd-session[5523]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:26.406695 systemd-logind[1651]: New session 14 of user core. Jan 20 23:55:26.414644 systemd[1]: Started session-14.scope - Session 14 of User core. 
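Once a pull fails, the kubelet keeps the pod in `ImagePullBackOff` and retries on an exponential schedule, which is why the same error text reappears above every few tens of seconds. The schedule below assumes the kubelet's default image back-off (10 s initial delay, doubling, capped at 5 minutes); those parameters are an assumption, not something stated in this log.

```python
# Back-off cadence behind the repeated ImagePullBackOff messages, assuming the
# kubelet's default image pull back-off: 10s initial, factor 2, 300s cap.
def backoff_schedule(initial: float = 10.0, factor: float = 2.0,
                     cap: float = 300.0, steps: int = 6) -> list[float]:
    delays, delay = [], initial
    for _ in range(steps):
        delays.append(delay)
        delay = min(delay * factor, cap)
    return delays

print(backoff_schedule())   # [10.0, 20.0, 40.0, 80.0, 160.0, 300.0]
```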
Jan 20 23:55:26.417000 audit[5523]: USER_START pid=5523 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.418000 audit[5527]: CRED_ACQ pid=5527 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.807486 sshd[5527]: Connection closed by 20.161.92.111 port 41410 Jan 20 23:55:26.807633 sshd-session[5523]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:26.809000 audit[5523]: USER_END pid=5523 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.809000 audit[5523]: CRED_DISP pid=5523 uid=0 auid=500 ses=14 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:26.814166 systemd[1]: sshd@12-10.0.2.209:22-20.161.92.111:41410.service: Deactivated successfully. Jan 20 23:55:26.814000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@12-10.0.2.209:22-20.161.92.111:41410 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:26.816509 systemd[1]: session-14.scope: Deactivated successfully. Jan 20 23:55:26.818191 systemd-logind[1651]: Session 14 logged out. Waiting for processes to exit. Jan 20 23:55:26.822482 systemd-logind[1651]: Removed session 14. Jan 20 23:55:26.912430 systemd[1]: Started sshd@13-10.0.2.209:22-20.161.92.111:41416.service - OpenSSH per-connection server daemon (20.161.92.111:41416). Jan 20 23:55:26.912000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.2.209:22-20.161.92.111:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:27.439000 audit[5541]: USER_ACCT pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.440335 sshd[5541]: Accepted publickey for core from 20.161.92.111 port 41416 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:27.440000 audit[5541]: CRED_ACQ pid=5541 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.440000 audit[5541]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffff88dcf70 a2=3 a3=0 items=0 ppid=1 pid=5541 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=15 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:27.440000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:27.441982 sshd-session[5541]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:27.446038 systemd-logind[1651]: New session 15 of user core. Jan 20 23:55:27.451640 systemd[1]: Started session-15.scope - Session 15 of User core. Jan 20 23:55:27.454000 audit[5541]: USER_START pid=5541 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.455000 audit[5546]: CRED_ACQ pid=5546 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.790450 sshd[5546]: Connection closed by 20.161.92.111 port 41416 Jan 20 23:55:27.790973 sshd-session[5541]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:27.792000 audit[5541]: USER_END pid=5541 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.792000 audit[5541]: CRED_DISP pid=5541 uid=0 auid=500 ses=15 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:27.795492 systemd[1]: sshd@13-10.0.2.209:22-20.161.92.111:41416.service: Deactivated successfully. Jan 20 23:55:27.795000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@13-10.0.2.209:22-20.161.92.111:41416 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:27.797295 systemd[1]: session-15.scope: Deactivated successfully. Jan 20 23:55:27.798179 systemd-logind[1651]: Session 15 logged out. Waiting for processes to exit. 
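The `Accepted publickey ... RSA SHA256:AolhJ8Cq...` lines report the client key's fingerprint in OpenSSH's usual form: unpadded base64 of the SHA-256 digest of the raw key blob. The sketch below recomputes that format from an authorized_keys entry; the file path is hypothetical and the key behind `AolhJ8Cq...` is of course not present in the log.

```python
# Sketch: recompute an OpenSSH-style "SHA256:..." fingerprint (unpadded base64
# of the SHA-256 of the base64-decoded key blob) from an authorized_keys line.
import base64
import hashlib

def ssh_fingerprint(authorized_keys_line: str) -> str:
    blob_b64 = authorized_keys_line.split()[1]          # "ssh-rsa <blob> comment"
    blob = base64.b64decode(blob_b64 + "=" * (-len(blob_b64) % 4))
    digest = hashlib.sha256(blob).digest()
    return "SHA256:" + base64.b64encode(digest).decode().rstrip("=")

with open("/home/core/.ssh/authorized_keys") as fh:     # hypothetical path
    print(ssh_fingerprint(fh.readline()))
```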
Jan 20 23:55:27.799294 systemd-logind[1651]: Removed session 15. Jan 20 23:55:28.126721 kubelet[2940]: E0120 23:55:28.126575 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:55:32.901235 systemd[1]: Started sshd@14-10.0.2.209:22-20.161.92.111:42046.service - OpenSSH per-connection server daemon (20.161.92.111:42046). Jan 20 23:55:32.900000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.2.209:22-20.161.92.111:42046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:32.902022 kernel: kauditd_printk_skb: 23 callbacks suppressed Jan 20 23:55:32.902083 kernel: audit: type=1130 audit(1768953332.900:783): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.2.209:22-20.161.92.111:42046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:33.127310 kubelet[2940]: E0120 23:55:33.126954 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:55:33.441000 audit[5560]: USER_ACCT pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.442477 sshd[5560]: Accepted publickey for core from 20.161.92.111 port 42046 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:33.445571 kernel: audit: type=1101 audit(1768953333.441:784): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.445000 audit[5560]: CRED_ACQ pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.446383 sshd-session[5560]: 
pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:33.450422 kernel: audit: type=1103 audit(1768953333.445:785): pid=5560 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.450779 kernel: audit: type=1006 audit(1768953333.445:786): pid=5560 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=16 res=1 Jan 20 23:55:33.450873 kernel: audit: type=1300 audit(1768953333.445:786): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea785ee0 a2=3 a3=0 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:33.445000 audit[5560]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffea785ee0 a2=3 a3=0 items=0 ppid=1 pid=5560 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=16 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:33.445000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:33.454212 systemd-logind[1651]: New session 16 of user core. Jan 20 23:55:33.455226 kernel: audit: type=1327 audit(1768953333.445:786): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:33.464702 systemd[1]: Started session-16.scope - Session 16 of User core. Jan 20 23:55:33.467000 audit[5560]: USER_START pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.471000 audit[5564]: CRED_ACQ pid=5564 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.475107 kernel: audit: type=1105 audit(1768953333.467:787): pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.475234 kernel: audit: type=1103 audit(1768953333.471:788): pid=5564 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.812084 sshd[5564]: Connection closed by 20.161.92.111 port 42046 Jan 20 23:55:33.812435 sshd-session[5560]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:33.814000 audit[5560]: USER_END pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" 
hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.818921 systemd[1]: sshd@14-10.0.2.209:22-20.161.92.111:42046.service: Deactivated successfully. Jan 20 23:55:33.814000 audit[5560]: CRED_DISP pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.821752 kernel: audit: type=1106 audit(1768953333.814:789): pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.821843 kernel: audit: type=1104 audit(1768953333.814:790): pid=5560 uid=0 auid=500 ses=16 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:33.817000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@14-10.0.2.209:22-20.161.92.111:42046 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:33.822610 systemd[1]: session-16.scope: Deactivated successfully. Jan 20 23:55:33.823501 systemd-logind[1651]: Session 16 logged out. Waiting for processes to exit. Jan 20 23:55:33.825076 systemd-logind[1651]: Removed session 16. Jan 20 23:55:33.923000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.2.209:22-20.161.92.111:42060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:33.923904 systemd[1]: Started sshd@15-10.0.2.209:22-20.161.92.111:42060.service - OpenSSH per-connection server daemon (20.161.92.111:42060). Jan 20 23:55:34.460000 audit[5578]: USER_ACCT pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.460953 sshd[5578]: Accepted publickey for core from 20.161.92.111 port 42060 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:34.461000 audit[5578]: CRED_ACQ pid=5578 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.461000 audit[5578]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe21b61c0 a2=3 a3=0 items=0 ppid=1 pid=5578 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=17 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:34.461000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:34.462723 sshd-session[5578]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:34.467804 systemd-logind[1651]: New session 17 of user core. 
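The `SYSCALL` records that accompany each login all show `arch=c00000b7 syscall=64 ... exit=3`. Decoding those constants gives the AArch64 audit arch and the arm64 `write(2)` syscall, which is consistent with pam_loginuid writing the new loginuid "500" (3 bytes, matching `exit=3`) during session setup; that last reading is an inference, not stated in the log. The lookup below covers only the values seen here.

```python
# Decode the constants in the SYSCALL records above; the tables cover only the
# values that appear in this journal (AArch64 guest, generic arm64 syscall ABI).
AUDIT_ARCH = {0xC00000B7: "AUDIT_ARCH_AARCH64"}   # 64-bit | little-endian | EM_AARCH64 (183)
ARM64_SYSCALL = {64: "write"}                     # asm-generic: __NR_write == 64

arch, nr, ret = 0xC00000B7, 64, 3                 # arch=c00000b7 syscall=64 exit=3
print(f"{AUDIT_ARCH[arch]} {ARM64_SYSCALL[nr]}() returned {ret} bytes")
# AUDIT_ARCH_AARCH64 write() returned 3 bytes
```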
Jan 20 23:55:34.473625 systemd[1]: Started session-17.scope - Session 17 of User core. Jan 20 23:55:34.475000 audit[5578]: USER_START pid=5578 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.477000 audit[5609]: CRED_ACQ pid=5609 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.884577 sshd[5609]: Connection closed by 20.161.92.111 port 42060 Jan 20 23:55:34.884656 sshd-session[5578]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:34.886000 audit[5578]: USER_END pid=5578 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.886000 audit[5578]: CRED_DISP pid=5578 uid=0 auid=500 ses=17 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:34.890135 systemd[1]: sshd@15-10.0.2.209:22-20.161.92.111:42060.service: Deactivated successfully. Jan 20 23:55:34.891000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@15-10.0.2.209:22-20.161.92.111:42060 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:34.893409 systemd[1]: session-17.scope: Deactivated successfully. Jan 20 23:55:34.895270 systemd-logind[1651]: Session 17 logged out. Waiting for processes to exit. Jan 20 23:55:34.896144 systemd-logind[1651]: Removed session 17. Jan 20 23:55:34.993948 systemd[1]: Started sshd@16-10.0.2.209:22-20.161.92.111:42076.service - OpenSSH per-connection server daemon (20.161.92.111:42076). Jan 20 23:55:34.993000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.2.209:22-20.161.92.111:42076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:35.539000 audit[5620]: USER_ACCT pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:35.540502 sshd[5620]: Accepted publickey for core from 20.161.92.111 port 42076 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:35.539000 audit[5620]: CRED_ACQ pid=5620 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:35.540000 audit[5620]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6eca110 a2=3 a3=0 items=0 ppid=1 pid=5620 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=18 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:35.540000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:35.542178 sshd-session[5620]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:35.546645 systemd-logind[1651]: New session 18 of user core. Jan 20 23:55:35.552654 systemd[1]: Started session-18.scope - Session 18 of User core. Jan 20 23:55:35.553000 audit[5620]: USER_START pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:35.556000 audit[5624]: CRED_ACQ pid=5624 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:36.127784 kubelet[2940]: E0120 23:55:36.127662 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:55:36.384000 audit[5635]: NETFILTER_CFG table=filter:137 family=2 entries=26 op=nft_register_rule pid=5635 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:36.384000 audit[5635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffe2b10650 a2=0 a3=1 items=0 ppid=3104 pid=5635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:36.384000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:36.394000 audit[5635]: NETFILTER_CFG table=nat:138 family=2 entries=20 op=nft_register_rule pid=5635 subj=system_u:system_r:kernel_t:s0 
comm="iptables-restor" Jan 20 23:55:36.394000 audit[5635]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffe2b10650 a2=0 a3=1 items=0 ppid=3104 pid=5635 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:36.394000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:36.423000 audit[5637]: NETFILTER_CFG table=filter:139 family=2 entries=38 op=nft_register_rule pid=5637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:36.423000 audit[5637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=14176 a0=3 a1=ffffda7e29c0 a2=0 a3=1 items=0 ppid=3104 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:36.423000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:36.430000 audit[5637]: NETFILTER_CFG table=nat:140 family=2 entries=20 op=nft_register_rule pid=5637 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:36.430000 audit[5637]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5772 a0=3 a1=ffffda7e29c0 a2=0 a3=1 items=0 ppid=3104 pid=5637 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:36.430000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:36.494963 sshd[5624]: Connection closed by 20.161.92.111 port 42076 Jan 20 23:55:36.495388 sshd-session[5620]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:36.495000 audit[5620]: USER_END pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:36.495000 audit[5620]: CRED_DISP pid=5620 uid=0 auid=500 ses=18 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:36.499791 systemd[1]: sshd@16-10.0.2.209:22-20.161.92.111:42076.service: Deactivated successfully. Jan 20 23:55:36.498000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@16-10.0.2.209:22-20.161.92.111:42076 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:36.501677 systemd[1]: session-18.scope: Deactivated successfully. Jan 20 23:55:36.502566 systemd-logind[1651]: Session 18 logged out. Waiting for processes to exit. Jan 20 23:55:36.503948 systemd-logind[1651]: Removed session 18. Jan 20 23:55:36.609744 systemd[1]: Started sshd@17-10.0.2.209:22-20.161.92.111:42084.service - OpenSSH per-connection server daemon (20.161.92.111:42084). 
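The `NETFILTER_CFG`/`SYSCALL` pairs above record iptables rule reloads; their `PROCTITLE` field encodes the caller's full argv as hex with NUL separators, so the exact `iptables-restore` invocation can be read back out of the record:

```python
# Decode the PROCTITLE field of the NETFILTER_CFG records above: hex-encoded
# argv with NUL separators, recovering the exact iptables-restore command line.
PROCTITLE = ("69707461626C65732D726573746F7265002D770035002D5700"
             "313030303030002D2D6E6F666C757368002D2D636F756E74657273")

argv = [word.decode() for word in bytes.fromhex(PROCTITLE).split(b"\x00")]
print(" ".join(argv))
# iptables-restore -w 5 -W 100000 --noflush --counters
```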
Jan 20 23:55:36.609000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.2.209:22-20.161.92.111:42084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:37.126223 kubelet[2940]: E0120 23:55:37.126038 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:55:37.152000 audit[5644]: USER_ACCT pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.153643 sshd[5644]: Accepted publickey for core from 20.161.92.111 port 42084 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:37.153000 audit[5644]: CRED_ACQ pid=5644 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.153000 audit[5644]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd092e1f0 a2=3 a3=0 items=0 ppid=1 pid=5644 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=19 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:37.153000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:37.155357 sshd-session[5644]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:37.159552 systemd-logind[1651]: New session 19 of user core. Jan 20 23:55:37.168683 systemd[1]: Started session-19.scope - Session 19 of User core. 
Jan 20 23:55:37.171000 audit[5644]: USER_START pid=5644 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.172000 audit[5648]: CRED_ACQ pid=5648 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.637504 sshd[5648]: Connection closed by 20.161.92.111 port 42084 Jan 20 23:55:37.637892 sshd-session[5644]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:37.639000 audit[5644]: USER_END pid=5644 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.639000 audit[5644]: CRED_DISP pid=5644 uid=0 auid=500 ses=19 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:37.643024 systemd[1]: sshd@17-10.0.2.209:22-20.161.92.111:42084.service: Deactivated successfully. Jan 20 23:55:37.642000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@17-10.0.2.209:22-20.161.92.111:42084 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:37.645203 systemd[1]: session-19.scope: Deactivated successfully. Jan 20 23:55:37.646606 systemd-logind[1651]: Session 19 logged out. Waiting for processes to exit. Jan 20 23:55:37.648341 systemd-logind[1651]: Removed session 19. Jan 20 23:55:37.749000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.209:22-20.161.92.111:42098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:37.750676 systemd[1]: Started sshd@18-10.0.2.209:22-20.161.92.111:42098.service - OpenSSH per-connection server daemon (20.161.92.111:42098). 
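Each accepted connection gets its own transient systemd unit, and the unit name itself encodes the endpoints (`sshd@<counter>-<local addr>:<port>-<peer addr>:<port>.service`), which is how the `SERVICE_START`/`SERVICE_STOP` audit events tie back to a specific client port. An illustrative parse of one unit name taken from the journal above:

```python
# Illustrative: pull the endpoints back out of a per-connection sshd unit name
# as it appears in the SERVICE_START/SERVICE_STOP records above.
import re

UNIT = "sshd@18-10.0.2.209:22-20.161.92.111:42098.service"    # from the journal
m = re.fullmatch(r"sshd@(\d+)-(.+):(\d+)-(.+):(\d+)\.service", UNIT)
counter, local_addr, local_port, peer_addr, peer_port = m.groups()
print(f"connection #{counter}: {peer_addr}:{peer_port} -> {local_addr}:{local_port}")
# connection #18: 20.161.92.111:42098 -> 10.0.2.209:22
```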
Jan 20 23:55:38.130093 kubelet[2940]: E0120 23:55:38.129995 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:55:38.266000 audit[5659]: USER_ACCT pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.268929 kernel: kauditd_printk_skb: 47 callbacks suppressed Jan 20 23:55:38.268986 kernel: audit: type=1101 audit(1768953338.266:824): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.270043 sshd[5659]: Accepted publickey for core from 20.161.92.111 port 42098 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:38.271000 audit[5659]: CRED_ACQ pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.273824 sshd-session[5659]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:38.275863 kernel: audit: type=1103 audit(1768953338.271:825): pid=5659 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.278151 kernel: audit: type=1006 audit(1768953338.271:826): pid=5659 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=20 res=1 Jan 20 23:55:38.271000 audit[5659]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd40cf320 a2=3 a3=0 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:38.281524 kernel: audit: type=1300 audit(1768953338.271:826): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd40cf320 a2=3 a3=0 items=0 ppid=1 pid=5659 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=20 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:38.271000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 
23:55:38.282791 kernel: audit: type=1327 audit(1768953338.271:826): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:38.284505 systemd-logind[1651]: New session 20 of user core. Jan 20 23:55:38.289760 systemd[1]: Started session-20.scope - Session 20 of User core. Jan 20 23:55:38.291000 audit[5659]: USER_START pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.293000 audit[5663]: CRED_ACQ pid=5663 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.298925 kernel: audit: type=1105 audit(1768953338.291:827): pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.299061 kernel: audit: type=1103 audit(1768953338.293:828): pid=5663 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.620025 sshd[5663]: Connection closed by 20.161.92.111 port 42098 Jan 20 23:55:38.620684 sshd-session[5659]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:38.621000 audit[5659]: USER_END pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.625524 systemd[1]: sshd@18-10.0.2.209:22-20.161.92.111:42098.service: Deactivated successfully. Jan 20 23:55:38.621000 audit[5659]: CRED_DISP pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.627313 systemd[1]: session-20.scope: Deactivated successfully. 
Jan 20 23:55:38.629435 kernel: audit: type=1106 audit(1768953338.621:829): pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.629563 kernel: audit: type=1104 audit(1768953338.621:830): pid=5659 uid=0 auid=500 ses=20 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:38.625000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.209:22-20.161.92.111:42098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:38.632305 kernel: audit: type=1131 audit(1768953338.625:831): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@18-10.0.2.209:22-20.161.92.111:42098 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:38.632540 systemd-logind[1651]: Session 20 logged out. Waiting for processes to exit. Jan 20 23:55:38.633331 systemd-logind[1651]: Removed session 20. Jan 20 23:55:39.127147 kubelet[2940]: E0120 23:55:39.127089 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:55:40.366000 audit[5677]: NETFILTER_CFG table=filter:141 family=2 entries=26 op=nft_register_rule pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:40.366000 audit[5677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=5248 a0=3 a1=ffffea98b660 a2=0 a3=1 items=0 ppid=3104 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:40.366000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:40.380000 audit[5677]: NETFILTER_CFG table=nat:142 family=2 entries=104 op=nft_register_chain pid=5677 subj=system_u:system_r:kernel_t:s0 comm="iptables-restor" Jan 20 23:55:40.380000 audit[5677]: SYSCALL arch=c00000b7 syscall=211 success=yes exit=48684 a0=3 a1=ffffea98b660 a2=0 a3=1 items=0 ppid=3104 pid=5677 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="iptables-restor" exe="/usr/sbin/xtables-nft-multi" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:40.380000 audit: PROCTITLE proctitle=69707461626C65732D726573746F7265002D770035002D5700313030303030002D2D6E6F666C757368002D2D636F756E74657273 Jan 20 23:55:41.125995 kubelet[2940]: E0120 23:55:41.125938 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:55:43.736000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.209:22-20.161.92.111:56990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:43.738657 kernel: kauditd_printk_skb: 6 callbacks suppressed Jan 20 23:55:43.738695 kernel: audit: type=1130 audit(1768953343.736:834): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.209:22-20.161.92.111:56990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:43.737574 systemd[1]: Started sshd@19-10.0.2.209:22-20.161.92.111:56990.service - OpenSSH per-connection server daemon (20.161.92.111:56990). Jan 20 23:55:44.129825 kubelet[2940]: E0120 23:55:44.128857 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:55:44.273000 audit[5679]: USER_ACCT pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.275570 sshd[5679]: Accepted publickey for core from 20.161.92.111 port 56990 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:44.273000 audit[5679]: CRED_ACQ pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.280475 sshd-session[5679]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:44.282659 kernel: audit: type=1101 audit(1768953344.273:835): pid=5679 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.282871 kernel: audit: type=1103 audit(1768953344.273:836): pid=5679 uid=0 auid=4294967295 ses=4294967295 
subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.285186 kernel: audit: type=1006 audit(1768953344.273:837): pid=5679 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=21 res=1 Jan 20 23:55:44.285240 kernel: audit: type=1300 audit(1768953344.273:837): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffed3e9a0 a2=3 a3=0 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:44.273000 audit[5679]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=fffffed3e9a0 a2=3 a3=0 items=0 ppid=1 pid=5679 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=21 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:44.273000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:44.290221 kernel: audit: type=1327 audit(1768953344.273:837): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:44.294391 systemd-logind[1651]: New session 21 of user core. Jan 20 23:55:44.303743 systemd[1]: Started session-21.scope - Session 21 of User core. Jan 20 23:55:44.306000 audit[5679]: USER_START pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.313492 kernel: audit: type=1105 audit(1768953344.306:838): pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.312000 audit[5683]: CRED_ACQ pid=5683 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.317497 kernel: audit: type=1103 audit(1768953344.312:839): pid=5683 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.651759 sshd[5683]: Connection closed by 20.161.92.111 port 56990 Jan 20 23:55:44.652091 sshd-session[5679]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:44.652000 audit[5679]: USER_END pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.652000 audit[5679]: CRED_DISP pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred 
grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.658315 systemd[1]: sshd@19-10.0.2.209:22-20.161.92.111:56990.service: Deactivated successfully. Jan 20 23:55:44.660097 systemd[1]: session-21.scope: Deactivated successfully. Jan 20 23:55:44.661099 kernel: audit: type=1106 audit(1768953344.652:840): pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.661195 kernel: audit: type=1104 audit(1768953344.652:841): pid=5679 uid=0 auid=500 ses=21 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:44.662221 systemd-logind[1651]: Session 21 logged out. Waiting for processes to exit. Jan 20 23:55:44.657000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@19-10.0.2.209:22-20.161.92.111:56990 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:44.665558 systemd-logind[1651]: Removed session 21. Jan 20 23:55:49.766410 systemd[1]: Started sshd@20-10.0.2.209:22-20.161.92.111:56996.service - OpenSSH per-connection server daemon (20.161.92.111:56996). Jan 20 23:55:49.765000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.209:22-20.161.92.111:56996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:49.770321 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:55:49.770410 kernel: audit: type=1130 audit(1768953349.765:843): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.209:22-20.161.92.111:56996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? 
res=success' Jan 20 23:55:50.307000 audit[5697]: USER_ACCT pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.308915 sshd[5697]: Accepted publickey for core from 20.161.92.111 port 56996 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:50.313495 kernel: audit: type=1101 audit(1768953350.307:844): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.312000 audit[5697]: CRED_ACQ pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.314656 sshd-session[5697]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:50.318556 kernel: audit: type=1103 audit(1768953350.312:845): pid=5697 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.319549 kernel: audit: type=1006 audit(1768953350.312:846): pid=5697 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=22 res=1 Jan 20 23:55:50.319581 kernel: audit: type=1300 audit(1768953350.312:846): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc42dba0 a2=3 a3=0 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:50.312000 audit[5697]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffcc42dba0 a2=3 a3=0 items=0 ppid=1 pid=5697 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=22 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:50.321891 kernel: audit: type=1327 audit(1768953350.312:846): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:50.312000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:50.321648 systemd-logind[1651]: New session 22 of user core. Jan 20 23:55:50.330647 systemd[1]: Started session-22.scope - Session 22 of User core. 
Jan 20 23:55:50.331000 audit[5697]: USER_START pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.335000 audit[5701]: CRED_ACQ pid=5701 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.340125 kernel: audit: type=1105 audit(1768953350.331:847): pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.340222 kernel: audit: type=1103 audit(1768953350.335:848): pid=5701 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.710002 sshd[5701]: Connection closed by 20.161.92.111 port 56996 Jan 20 23:55:50.709336 sshd-session[5697]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:50.709000 audit[5697]: USER_END pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.714053 systemd[1]: sshd@20-10.0.2.209:22-20.161.92.111:56996.service: Deactivated successfully. Jan 20 23:55:50.709000 audit[5697]: CRED_DISP pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.717795 systemd[1]: session-22.scope: Deactivated successfully. Jan 20 23:55:50.718622 kernel: audit: type=1106 audit(1768953350.709:849): pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.718691 kernel: audit: type=1104 audit(1768953350.709:850): pid=5697 uid=0 auid=500 ses=22 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:50.712000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@20-10.0.2.209:22-20.161.92.111:56996 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:50.719055 systemd-logind[1651]: Session 22 logged out. Waiting for processes to exit. Jan 20 23:55:50.720322 systemd-logind[1651]: Removed session 22. 
Jan 20 23:55:51.126535 kubelet[2940]: E0120 23:55:51.126372 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:55:52.129076 kubelet[2940]: E0120 23:55:52.128816 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:55:52.129592 kubelet[2940]: E0120 23:55:52.129300 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:55:54.126058 kubelet[2940]: E0120 23:55:54.126004 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:55:55.125742 kubelet[2940]: E0120 23:55:55.125697 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:55:55.814296 systemd[1]: Started sshd@21-10.0.2.209:22-20.161.92.111:34150.service - OpenSSH per-connection server daemon (20.161.92.111:34150). 
Jan 20 23:55:55.812000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.209:22-20.161.92.111:34150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:55.815506 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:55:55.815564 kernel: audit: type=1130 audit(1768953355.812:852): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.209:22-20.161.92.111:34150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:56.338000 audit[5715]: USER_ACCT pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.339883 sshd[5715]: Accepted publickey for core from 20.161.92.111 port 34150 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:55:56.340000 audit[5715]: CRED_ACQ pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.342787 sshd-session[5715]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:55:56.345571 kernel: audit: type=1101 audit(1768953356.338:853): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.345630 kernel: audit: type=1103 audit(1768953356.340:854): pid=5715 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.345655 kernel: audit: type=1006 audit(1768953356.340:855): pid=5715 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=23 res=1 Jan 20 23:55:56.347134 systemd-logind[1651]: New session 23 of user core. Jan 20 23:55:56.347362 kernel: audit: type=1300 audit(1768953356.340:855): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6455f30 a2=3 a3=0 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.340000 audit[5715]: SYSCALL arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffd6455f30 a2=3 a3=0 items=0 ppid=1 pid=5715 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=23 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:55:56.350495 kernel: audit: type=1327 audit(1768953356.340:855): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:56.340000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:55:56.356652 systemd[1]: Started session-23.scope - Session 23 of User core. 
Jan 20 23:55:56.357000 audit[5715]: USER_START pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.359000 audit[5719]: CRED_ACQ pid=5719 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.365396 kernel: audit: type=1105 audit(1768953356.357:856): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.365442 kernel: audit: type=1103 audit(1768953356.359:857): pid=5719 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.686314 sshd[5719]: Connection closed by 20.161.92.111 port 34150 Jan 20 23:55:56.686786 sshd-session[5715]: pam_unix(sshd:session): session closed for user core Jan 20 23:55:56.686000 audit[5715]: USER_END pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.692013 systemd-logind[1651]: Session 23 logged out. Waiting for processes to exit. Jan 20 23:55:56.686000 audit[5715]: CRED_DISP pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.692778 systemd[1]: sshd@21-10.0.2.209:22-20.161.92.111:34150.service: Deactivated successfully. Jan 20 23:55:56.694687 systemd[1]: session-23.scope: Deactivated successfully. Jan 20 23:55:56.694883 kernel: audit: type=1106 audit(1768953356.686:858): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.694966 kernel: audit: type=1104 audit(1768953356.686:859): pid=5715 uid=0 auid=500 ses=23 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:55:56.691000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@21-10.0.2.209:22-20.161.92.111:34150 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:55:56.696158 systemd-logind[1651]: Removed session 23. 
Jan 20 23:55:58.127013 kubelet[2940]: E0120 23:55:58.126960 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:56:01.799352 systemd[1]: Started sshd@22-10.0.2.209:22-20.161.92.111:34164.service - OpenSSH per-connection server daemon (20.161.92.111:34164). Jan 20 23:56:01.798000 audit[1]: SERVICE_START pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.209:22-20.161.92.111:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:01.800954 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:56:01.801007 kernel: audit: type=1130 audit(1768953361.798:861): pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.209:22-20.161.92.111:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:02.345000 audit[5735]: USER_ACCT pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.347060 sshd[5735]: Accepted publickey for core from 20.161.92.111 port 34164 ssh2: RSA SHA256:AolhJ8Cq1lJNPaiSsM8U1fo/mZtfN0MsM/RIruU7I4I Jan 20 23:56:02.350491 kernel: audit: type=1101 audit(1768953362.345:862): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:accounting grantors=pam_access,pam_time,pam_unix,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.349000 audit[5735]: CRED_ACQ pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.351817 sshd-session[5735]: pam_unix(sshd:session): session opened for user core(uid=500) by core(uid=0) Jan 20 23:56:02.355325 kernel: audit: type=1103 audit(1768953362.349:863): pid=5735 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.355379 kernel: audit: type=1006 audit(1768953362.349:864): pid=5735 uid=0 subj=system_u:system_r:kernel_t:s0 old-auid=4294967295 auid=500 tty=(none) old-ses=4294967295 ses=24 res=1 Jan 20 23:56:02.349000 audit[5735]: SYSCALL arch=c00000b7 
syscall=64 success=yes exit=3 a0=8 a1=ffffe9e70d00 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:02.358724 kernel: audit: type=1300 audit(1768953362.349:864): arch=c00000b7 syscall=64 success=yes exit=3 a0=8 a1=ffffe9e70d00 a2=3 a3=0 items=0 ppid=1 pid=5735 auid=500 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=24 comm="sshd-session" exe="/usr/lib64/misc/sshd-session" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:02.358776 kernel: audit: type=1327 audit(1768953362.349:864): proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:56:02.349000 audit: PROCTITLE proctitle=737368642D73657373696F6E3A20636F7265205B707269765D Jan 20 23:56:02.363826 systemd-logind[1651]: New session 24 of user core. Jan 20 23:56:02.370668 systemd[1]: Started session-24.scope - Session 24 of User core. Jan 20 23:56:02.372000 audit[5735]: USER_START pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.374000 audit[5739]: CRED_ACQ pid=5739 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.380581 kernel: audit: type=1105 audit(1768953362.372:865): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_open grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.380645 kernel: audit: type=1103 audit(1768953362.374:866): pid=5739 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.714283 sshd[5739]: Connection closed by 20.161.92.111 port 34164 Jan 20 23:56:02.714658 sshd-session[5735]: pam_unix(sshd:session): session closed for user core Jan 20 23:56:02.715000 audit[5735]: USER_END pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.721114 systemd[1]: sshd@22-10.0.2.209:22-20.161.92.111:34164.service: Deactivated successfully. Jan 20 23:56:02.715000 audit[5735]: CRED_DISP pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.724057 systemd[1]: session-24.scope: Deactivated successfully. 
Jan 20 23:56:02.724796 kernel: audit: type=1106 audit(1768953362.715:867): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:session_close grantors=pam_loginuid,pam_env,pam_namespace,pam_keyinit,pam_limits,pam_env,pam_umask,pam_unix,pam_systemd,pam_lastlog,pam_mail acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.724876 kernel: audit: type=1104 audit(1768953362.715:868): pid=5735 uid=0 auid=500 ses=24 subj=system_u:system_r:kernel_t:s0 msg='op=PAM:setcred grantors=pam_env,pam_faillock acct="core" exe="/usr/lib64/misc/sshd-session" hostname=20.161.92.111 addr=20.161.92.111 terminal=ssh res=success' Jan 20 23:56:02.721000 audit[1]: SERVICE_STOP pid=1 uid=0 auid=4294967295 ses=4294967295 subj=system_u:system_r:kernel_t:s0 msg='unit=sshd@22-10.0.2.209:22-20.161.92.111:34164 comm="systemd" exe="/usr/lib/systemd/systemd" hostname=? addr=? terminal=? res=success' Jan 20 23:56:02.725138 systemd-logind[1651]: Session 24 logged out. Waiting for processes to exit. Jan 20 23:56:02.726391 systemd-logind[1651]: Removed session 24. Jan 20 23:56:03.126720 kubelet[2940]: E0120 23:56:03.126603 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:56:04.125991 kubelet[2940]: E0120 23:56:04.125928 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:56:06.127494 kubelet[2940]: E0120 23:56:06.126852 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:56:09.126202 kubelet[2940]: E0120 23:56:09.126144 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = 
failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:56:10.127142 kubelet[2940]: E0120 23:56:10.127078 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:56:12.130481 kubelet[2940]: E0120 23:56:12.128990 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:56:15.125660 kubelet[2940]: E0120 23:56:15.125599 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:56:16.126323 kubelet[2940]: E0120 23:56:16.126010 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:56:19.126043 containerd[1670]: time="2026-01-20T23:56:19.126002225Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\"" Jan 20 23:56:19.465277 containerd[1670]: time="2026-01-20T23:56:19.465054235Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:19.466914 containerd[1670]: time="2026-01-20T23:56:19.466824840Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image 
\"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" Jan 20 23:56:19.466914 containerd[1670]: time="2026-01-20T23:56:19.466883760Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:19.467171 kubelet[2940]: E0120 23:56:19.467139 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:19.467609 kubelet[2940]: E0120 23:56:19.467316 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker:v3.30.4" Jan 20 23:56:19.467889 kubelet[2940]: E0120 23:56:19.467838 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker,Image:ghcr.io/flatcar/calico/whisker:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:CALICO_VERSION,Value:v3.30.4,ValueFrom:nil,},EnvVar{Name:CLUSTER_ID,Value:149aa4975ca74b8e859dd8df0df1ec0f,ValueFrom:nil,},EnvVar{Name:CLUSTER_TYPE,Value:typha,kdd,k8s,operator,bgp,kubeadm,ValueFrom:nil,},EnvVar{Name:NOTIFICATIONS,Value:Enabled,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:19.469726 containerd[1670]: time="2026-01-20T23:56:19.469699208Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\"" Jan 20 23:56:19.820594 containerd[1670]: time="2026-01-20T23:56:19.820325490Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:19.822277 containerd[1670]: time="2026-01-20T23:56:19.822237696Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: 
ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" Jan 20 23:56:19.822347 containerd[1670]: time="2026-01-20T23:56:19.822321536Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/whisker-backend:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:19.822619 kubelet[2940]: E0120 23:56:19.822492 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:19.822619 kubelet[2940]: E0120 23:56:19.822547 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" image="ghcr.io/flatcar/calico/whisker-backend:v3.30.4" Jan 20 23:56:19.822824 kubelet[2940]: E0120 23:56:19.822788 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:whisker-backend,Image:ghcr.io/flatcar/calico/whisker-backend:v3.30.4,Command:[],Args:[],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:LOG_LEVEL,Value:INFO,ValueFrom:nil,},EnvVar{Name:PORT,Value:3002,ValueFrom:nil,},EnvVar{Name:GOLDMANE_HOST,Value:goldmane.calico-system.svc.cluster.local:7443,ValueFrom:nil,},EnvVar{Name:TLS_CERT_PATH,Value:/whisker-backend-key-pair/tls.crt,ValueFrom:nil,},EnvVar{Name:TLS_KEY_PATH,Value:/whisker-backend-key-pair/tls.key,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:whisker-backend-key-pair,ReadOnly:true,MountPath:/whisker-backend-key-pair,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:whisker-ca-bundle,ReadOnly:true,MountPath:/etc/pki/tls/certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-tgf5q,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:nil,Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod whisker-777bdc9c5f-sfvx4_calico-system(69215543-8df6-43c3-9b3c-95e532549500): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:19.824244 kubelet[2940]: E0120 23:56:19.824065 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image 
\\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:56:23.125857 kubelet[2940]: E0120 23:56:23.125807 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-nfdlb" podUID="0c4c02eb-8f55-425f-9205-33b16803b19e" Jan 20 23:56:23.126685 kubelet[2940]: E0120 23:56:23.126624 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"calico-csi\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/csi:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/csi:v3.30.4: not found\", failed to \"StartContainer\" for \"csi-node-driver-registrar\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/node-driver-registrar:v3.30.4: not found\"]" pod="calico-system/csi-node-driver-f8rjq" podUID="d9863653-2d98-4479-88c2-8614b7871a32" Jan 20 23:56:24.126402 kubelet[2940]: E0120 23:56:24.126358 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-kube-controllers\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/kube-controllers:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/kube-controllers:v3.30.4: not found\"" pod="calico-system/calico-kube-controllers-596dccccb4-mz7rs" podUID="a2e09f63-e3b2-438b-a6ce-d2eefff60f3e" Jan 20 23:56:25.931617 systemd[1]: cri-containerd-cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86.scope: Deactivated successfully. Jan 20 23:56:25.932142 systemd[1]: cri-containerd-cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86.scope: Consumed 34.597s CPU time, 102.3M memory peak. 
Jan 20 23:56:25.933394 containerd[1670]: time="2026-01-20T23:56:25.933339326Z" level=info msg="received container exit event container_id:\"cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86\" id:\"cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86\" pid:3278 exit_status:1 exited_at:{seconds:1768953385 nanos:932790204}" Jan 20 23:56:25.934000 audit: BPF prog-id=146 op=UNLOAD Jan 20 23:56:25.937234 kernel: kauditd_printk_skb: 1 callbacks suppressed Jan 20 23:56:25.937315 kernel: audit: type=1334 audit(1768953385.934:870): prog-id=146 op=UNLOAD Jan 20 23:56:25.937351 kernel: audit: type=1334 audit(1768953385.934:871): prog-id=150 op=UNLOAD Jan 20 23:56:25.934000 audit: BPF prog-id=150 op=UNLOAD Jan 20 23:56:25.954185 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86-rootfs.mount: Deactivated successfully. Jan 20 23:56:26.126642 kubelet[2940]: E0120 23:56:26.126563 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"goldmane\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/goldmane:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/goldmane:v3.30.4: not found\"" pod="calico-system/goldmane-666569f655-27t6c" podUID="7524b8f6-4e20-4bc6-8860-ffd104203deb" Jan 20 23:56:26.202420 kubelet[2940]: E0120 23:56:26.202248 2940 controller.go:195] "Failed to update lease" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.209:47136->10.0.2.187:2379: read: connection timed out" Jan 20 23:56:26.206269 systemd[1]: cri-containerd-02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37.scope: Deactivated successfully. Jan 20 23:56:26.207112 systemd[1]: cri-containerd-02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37.scope: Consumed 3.051s CPU time, 24M memory peak. Jan 20 23:56:26.206000 audit: BPF prog-id=256 op=LOAD Jan 20 23:56:26.206000 audit: BPF prog-id=88 op=UNLOAD Jan 20 23:56:26.209697 containerd[1670]: time="2026-01-20T23:56:26.209554315Z" level=info msg="received container exit event container_id:\"02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37\" id:\"02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37\" pid:2784 exit_status:1 exited_at:{seconds:1768953386 nanos:208959554}" Jan 20 23:56:26.209804 kernel: audit: type=1334 audit(1768953386.206:872): prog-id=256 op=LOAD Jan 20 23:56:26.209852 kernel: audit: type=1334 audit(1768953386.206:873): prog-id=88 op=UNLOAD Jan 20 23:56:26.212000 audit: BPF prog-id=103 op=UNLOAD Jan 20 23:56:26.212000 audit: BPF prog-id=107 op=UNLOAD Jan 20 23:56:26.216150 kernel: audit: type=1334 audit(1768953386.212:874): prog-id=103 op=UNLOAD Jan 20 23:56:26.216225 kernel: audit: type=1334 audit(1768953386.212:875): prog-id=107 op=UNLOAD Jan 20 23:56:26.231644 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37-rootfs.mount: Deactivated successfully. Jan 20 23:56:26.377000 audit: BPF prog-id=257 op=LOAD Jan 20 23:56:26.378237 systemd[1]: cri-containerd-5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93.scope: Deactivated successfully. 
Jan 20 23:56:26.378589 systemd[1]: cri-containerd-5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93.scope: Consumed 3.546s CPU time, 63.7M memory peak. Jan 20 23:56:26.380072 containerd[1670]: time="2026-01-20T23:56:26.379843482Z" level=info msg="received container exit event container_id:\"5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93\" id:\"5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93\" pid:2754 exit_status:1 exited_at:{seconds:1768953386 nanos:379490441}" Jan 20 23:56:26.377000 audit: BPF prog-id=83 op=UNLOAD Jan 20 23:56:26.381064 kernel: audit: type=1334 audit(1768953386.377:876): prog-id=257 op=LOAD Jan 20 23:56:26.381158 kernel: audit: type=1334 audit(1768953386.377:877): prog-id=83 op=UNLOAD Jan 20 23:56:26.381000 audit: BPF prog-id=98 op=UNLOAD Jan 20 23:56:26.381000 audit: BPF prog-id=102 op=UNLOAD Jan 20 23:56:26.386279 kernel: audit: type=1334 audit(1768953386.381:878): prog-id=98 op=UNLOAD Jan 20 23:56:26.386326 kernel: audit: type=1334 audit(1768953386.381:879): prog-id=102 op=UNLOAD Jan 20 23:56:26.406149 systemd[1]: run-containerd-io.containerd.runtime.v2.task-k8s.io-5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93-rootfs.mount: Deactivated successfully. Jan 20 23:56:26.585559 kubelet[2940]: E0120 23:56:26.585193 2940 event.go:359] "Server rejected event (will not retry!)" err="rpc error: code = Unavailable desc = error reading from server: read tcp 10.0.2.209:46970->10.0.2.187:2379: read: connection timed out" event="&Event{ObjectMeta:{calico-apiserver-767c66c85d-czqtd.188c9595927eaf14 calico-apiserver 1725 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:calico-apiserver,Name:calico-apiserver-767c66c85d-czqtd,UID:ba92b217-c758-44c6-b97e-3beb84feb1eb,APIVersion:v1,ResourceVersion:830,FieldPath:spec.containers{calico-apiserver},},Reason:BackOff,Message:Back-off pulling image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\",Source:EventSource{Component:kubelet,Host:ci-4547-0-0-n-e5b472a427,},FirstTimestamp:2026-01-20 23:53:43 +0000 UTC,LastTimestamp:2026-01-20 23:56:16.125938449 +0000 UTC m=+196.100057504,Count:11,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:ci-4547-0-0-n-e5b472a427,}" Jan 20 23:56:26.655354 kubelet[2940]: I0120 23:56:26.655315 2940 scope.go:117] "RemoveContainer" containerID="02ef16ecc5dea6e9a8ded426c3fa79343cda1cd435b1cb0b97e5195d66406b37" Jan 20 23:56:26.658013 containerd[1670]: time="2026-01-20T23:56:26.657880357Z" level=info msg="CreateContainer within sandbox \"93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2\" for container &ContainerMetadata{Name:kube-scheduler,Attempt:1,}" Jan 20 23:56:26.658737 kubelet[2940]: I0120 23:56:26.658561 2940 scope.go:117] "RemoveContainer" containerID="cf9770ba8065ddc3f14caa7f069636d40cd69798a9f8d13f93b556b9dab42c86" Jan 20 23:56:26.659590 kubelet[2940]: I0120 23:56:26.659560 2940 scope.go:117] "RemoveContainer" containerID="5f99e184344ef00ad6b782963eca67a8619e168f3c528e8344c5a98d8a7a5a93" Jan 20 23:56:26.660565 containerd[1670]: time="2026-01-20T23:56:26.660480885Z" level=info msg="CreateContainer within sandbox \"10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f\" for container &ContainerMetadata{Name:tigera-operator,Attempt:1,}" Jan 20 23:56:26.661978 containerd[1670]: time="2026-01-20T23:56:26.661946769Z" level=info msg="CreateContainer within sandbox 
\"06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a\" for container &ContainerMetadata{Name:kube-controller-manager,Attempt:1,}" Jan 20 23:56:26.674725 containerd[1670]: time="2026-01-20T23:56:26.674691325Z" level=info msg="Container bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:26.676814 containerd[1670]: time="2026-01-20T23:56:26.676780731Z" level=info msg="Container 2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:26.680939 containerd[1670]: time="2026-01-20T23:56:26.680907503Z" level=info msg="Container 71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6: CDI devices from CRI Config.CDIDevices: []" Jan 20 23:56:26.690824 containerd[1670]: time="2026-01-20T23:56:26.690770811Z" level=info msg="CreateContainer within sandbox \"10b4f8513b33e9c74e3ba307aa93cc843121721b3a87ede22fcac5ce1d6f6a0f\" for &ContainerMetadata{Name:tigera-operator,Attempt:1,} returns container id \"2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282\"" Jan 20 23:56:26.691320 containerd[1670]: time="2026-01-20T23:56:26.691271973Z" level=info msg="StartContainer for \"2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282\"" Jan 20 23:56:26.692079 containerd[1670]: time="2026-01-20T23:56:26.692030215Z" level=info msg="connecting to shim 2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282" address="unix:///run/containerd/s/95ef60629d42e62855d3d23a825537ea6905bede6ea3b7f9e5c34c24d3a72036" protocol=ttrpc version=3 Jan 20 23:56:26.692272 containerd[1670]: time="2026-01-20T23:56:26.692233135Z" level=info msg="CreateContainer within sandbox \"93bb9897e0a7b5a8e381cd2cb1281831cb5f12ebf87280ce7786963a4392c7c2\" for &ContainerMetadata{Name:kube-scheduler,Attempt:1,} returns container id \"bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512\"" Jan 20 23:56:26.693245 containerd[1670]: time="2026-01-20T23:56:26.693192818Z" level=info msg="StartContainer for \"bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512\"" Jan 20 23:56:26.694300 containerd[1670]: time="2026-01-20T23:56:26.694269381Z" level=info msg="connecting to shim bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512" address="unix:///run/containerd/s/77750dce380f3365929df256a0617d011d006192a7e7b031fdd8484e3499a26e" protocol=ttrpc version=3 Jan 20 23:56:26.696690 containerd[1670]: time="2026-01-20T23:56:26.696648028Z" level=info msg="CreateContainer within sandbox \"06bc1d9e68cf7fa2d7588a02a6add80af6a7507268975adcde6b53facd687b7a\" for &ContainerMetadata{Name:kube-controller-manager,Attempt:1,} returns container id \"71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6\"" Jan 20 23:56:26.697472 containerd[1670]: time="2026-01-20T23:56:26.697331150Z" level=info msg="StartContainer for \"71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6\"" Jan 20 23:56:26.698912 containerd[1670]: time="2026-01-20T23:56:26.698879434Z" level=info msg="connecting to shim 71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6" address="unix:///run/containerd/s/cad77158e5571cd5da02ac3a46ad66df1e9ff9afaacb3604ec8322968ef95f59" protocol=ttrpc version=3 Jan 20 23:56:26.713186 systemd[1]: Started cri-containerd-2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282.scope - libcontainer container 2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282. 
Jan 20 23:56:26.716511 systemd[1]: Started cri-containerd-bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512.scope - libcontainer container bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512. Jan 20 23:56:26.720297 systemd[1]: Started cri-containerd-71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6.scope - libcontainer container 71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6. Jan 20 23:56:26.730000 audit: BPF prog-id=258 op=LOAD Jan 20 23:56:26.730000 audit: BPF prog-id=259 op=LOAD Jan 20 23:56:26.730000 audit[5830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.730000 audit: BPF prog-id=259 op=UNLOAD Jan 20 23:56:26.730000 audit[5830]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.730000 audit: BPF prog-id=260 op=LOAD Jan 20 23:56:26.730000 audit[5830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.730000 audit: BPF prog-id=261 op=LOAD Jan 20 23:56:26.730000 audit[5830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.730000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.731000 audit: BPF prog-id=261 op=UNLOAD Jan 20 23:56:26.731000 audit[5830]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" 
subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.731000 audit: BPF prog-id=260 op=UNLOAD Jan 20 23:56:26.731000 audit[5830]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.731000 audit: BPF prog-id=262 op=LOAD Jan 20 23:56:26.731000 audit[5830]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=3046 pid=5830 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.731000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3265366134313661653532386431306261303037376538666562376439 Jan 20 23:56:26.731000 audit: BPF prog-id=263 op=LOAD Jan 20 23:56:26.732000 audit: BPF prog-id=264 op=LOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178180 a2=98 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=264 op=UNLOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=265 op=LOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001783e8 a2=98 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=266 op=LOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=4000178168 a2=98 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=266 op=UNLOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=265 op=UNLOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.732000 audit: BPF prog-id=267 op=LOAD Jan 20 23:56:26.732000 audit[5831]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=4000178648 a2=98 a3=0 items=0 ppid=2643 pid=5831 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.732000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F6266346631353339613435653663343635383962343535643539376162 Jan 20 23:56:26.738000 audit: BPF prog-id=268 op=LOAD Jan 20 23:56:26.739000 audit: BPF prog-id=269 op=LOAD Jan 20 23:56:26.739000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0180 a2=98 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.739000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.739000 audit: BPF prog-id=269 op=UNLOAD Jan 20 23:56:26.739000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.739000 audit: BPF prog-id=270 op=LOAD Jan 20 23:56:26.739000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b03e8 a2=98 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.739000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.740000 audit: BPF prog-id=271 op=LOAD Jan 20 23:56:26.740000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=23 a0=5 a1=40001b0168 a2=98 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.740000 audit: BPF prog-id=271 op=UNLOAD Jan 20 23:56:26.740000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=17 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.740000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.740000 audit: BPF prog-id=270 op=UNLOAD Jan 20 23:56:26.740000 audit[5847]: SYSCALL arch=c00000b7 syscall=57 success=yes exit=0 a0=15 a1=0 a2=0 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.740000 audit: PROCTITLE 
proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.741000 audit: BPF prog-id=272 op=LOAD Jan 20 23:56:26.741000 audit[5847]: SYSCALL arch=c00000b7 syscall=280 success=yes exit=21 a0=5 a1=40001b0648 a2=98 a3=0 items=0 ppid=2608 pid=5847 auid=4294967295 uid=0 gid=0 euid=0 suid=0 fsuid=0 egid=0 sgid=0 fsgid=0 tty=(none) ses=4294967295 comm="runc" exe="/usr/bin/runc" subj=system_u:system_r:kernel_t:s0 key=(null) Jan 20 23:56:26.741000 audit: PROCTITLE proctitle=72756E63002D2D726F6F74002F72756E2F636F6E7461696E6572642F72756E632F6B38732E696F002D2D6C6F67002F72756E2F636F6E7461696E6572642F696F2E636F6E7461696E6572642E72756E74696D652E76322E7461736B2F6B38732E696F2F3731643265663862303433373662373736653262646662366166663465 Jan 20 23:56:26.766494 containerd[1670]: time="2026-01-20T23:56:26.766428827Z" level=info msg="StartContainer for \"2e6a416ae528d10ba0077e8feb7d99b912d51d6717099653c7f1944c02829282\" returns successfully" Jan 20 23:56:26.774366 containerd[1670]: time="2026-01-20T23:56:26.774255170Z" level=info msg="StartContainer for \"bf4f1539a45e6c46589b455d597ab28690320703570f2de027f5421f381a2512\" returns successfully" Jan 20 23:56:26.780388 containerd[1670]: time="2026-01-20T23:56:26.780052026Z" level=info msg="StartContainer for \"71d2ef8b04376b776e2bdfb6aff4ef230dba0377a03a357da0919703886633a6\" returns successfully" Jan 20 23:56:27.126658 containerd[1670]: time="2026-01-20T23:56:27.126389456Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\"" Jan 20 23:56:27.453530 containerd[1670]: time="2026-01-20T23:56:27.453438551Z" level=info msg="fetch failed after status: 404 Not Found" host=ghcr.io Jan 20 23:56:27.454871 containerd[1670]: time="2026-01-20T23:56:27.454839395Z" level=error msg="PullImage \"ghcr.io/flatcar/calico/apiserver:v3.30.4\" failed" error="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" Jan 20 23:56:27.454946 containerd[1670]: time="2026-01-20T23:56:27.454911476Z" level=info msg="stop pulling image ghcr.io/flatcar/calico/apiserver:v3.30.4: active requests=0, bytes read=0" Jan 20 23:56:27.455101 kubelet[2940]: E0120 23:56:27.455062 2940 log.go:32] "PullImage from image service failed" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:27.455378 kubelet[2940]: E0120 23:56:27.455112 2940 kuberuntime_image.go:42] "Failed to pull image" err="rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" image="ghcr.io/flatcar/calico/apiserver:v3.30.4" Jan 20 23:56:27.455378 kubelet[2940]: E0120 23:56:27.455232 2940 kuberuntime_manager.go:1358] "Unhandled Error" err="container &Container{Name:calico-apiserver,Image:ghcr.io/flatcar/calico/apiserver:v3.30.4,Command:[],Args:[--secure-port=5443 --tls-private-key-file=/calico-apiserver-certs/tls.key 
--tls-cert-file=/calico-apiserver-certs/tls.crt],WorkingDir:,Ports:[]ContainerPort{},Env:[]EnvVar{EnvVar{Name:DATASTORE_TYPE,Value:kubernetes,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_HOST,Value:10.96.0.1,ValueFrom:nil,},EnvVar{Name:KUBERNETES_SERVICE_PORT,Value:443,ValueFrom:nil,},EnvVar{Name:LOG_LEVEL,Value:info,ValueFrom:nil,},EnvVar{Name:MULTI_INTERFACE_MODE,Value:none,ValueFrom:nil,},},Resources:ResourceRequirements{Limits:ResourceList{},Requests:ResourceList{},Claims:[]ResourceClaim{},},VolumeMounts:[]VolumeMount{VolumeMount{Name:calico-apiserver-certs,ReadOnly:true,MountPath:/calico-apiserver-certs,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},VolumeMount{Name:kube-api-access-ggz5c,ReadOnly:true,MountPath:/var/run/secrets/kubernetes.io/serviceaccount,SubPath:,MountPropagation:nil,SubPathExpr:,RecursiveReadOnly:nil,},},LivenessProbe:nil,ReadinessProbe:&Probe{ProbeHandler:ProbeHandler{Exec:nil,HTTPGet:&HTTPGetAction{Path:/readyz,Port:{0 5443 },Host:,Scheme:HTTPS,HTTPHeaders:[]HTTPHeader{},},TCPSocket:nil,GRPC:nil,},InitialDelaySeconds:0,TimeoutSeconds:5,PeriodSeconds:60,SuccessThreshold:1,FailureThreshold:3,TerminationGracePeriodSeconds:nil,},Lifecycle:nil,TerminationMessagePath:/dev/termination-log,ImagePullPolicy:IfNotPresent,SecurityContext:&SecurityContext{Capabilities:&Capabilities{Add:[],Drop:[ALL],},Privileged:*false,SELinuxOptions:nil,RunAsUser:*10001,RunAsNonRoot:*true,ReadOnlyRootFilesystem:nil,AllowPrivilegeEscalation:*false,RunAsGroup:*10001,ProcMount:nil,WindowsOptions:nil,SeccompProfile:&SeccompProfile{Type:RuntimeDefault,LocalhostProfile:nil,},AppArmorProfile:nil,},Stdin:false,StdinOnce:false,TTY:false,EnvFrom:[]EnvFromSource{},TerminationMessagePolicy:File,VolumeDevices:[]VolumeDevice{},StartupProbe:nil,ResizePolicy:[]ContainerResizePolicy{},RestartPolicy:nil,} start failed in pod calico-apiserver-767c66c85d-czqtd_calico-apiserver(ba92b217-c758-44c6-b97e-3beb84feb1eb): ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \"ghcr.io/flatcar/calico/apiserver:v3.30.4\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found" logger="UnhandledError" Jan 20 23:56:27.456422 kubelet[2940]: E0120 23:56:27.456385 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"calico-apiserver\" with ErrImagePull: \"rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/apiserver:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/apiserver:v3.30.4: not found\"" pod="calico-apiserver/calico-apiserver-767c66c85d-czqtd" podUID="ba92b217-c758-44c6-b97e-3beb84feb1eb" Jan 20 23:56:32.127525 kubelet[2940]: E0120 23:56:32.127444 2940 pod_workers.go:1301] "Error syncing pod, skipping" err="[failed to \"StartContainer\" for \"whisker\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker:v3.30.4: not found\", failed to \"StartContainer\" for \"whisker-backend\" with ImagePullBackOff: \"Back-off pulling image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": ErrImagePull: rpc error: code = NotFound desc = failed to pull and unpack image \\\"ghcr.io/flatcar/calico/whisker-backend:v3.30.4\\\": failed to resolve image: ghcr.io/flatcar/calico/whisker-backend:v3.30.4: not found\"]" 
pod="calico-system/whisker-777bdc9c5f-sfvx4" podUID="69215543-8df6-43c3-9b3c-95e532549500" Jan 20 23:56:36.126565 containerd[1670]: time="2026-01-20T23:56:36.126185185Z" level=info msg="PullImage \"ghcr.io/flatcar/calico/csi:v3.30.4\"" Jan 20 23:56:36.203151 kubelet[2940]: E0120 23:56:36.202903 2940 controller.go:195] "Failed to update lease" err="Put \"https://10.0.2.209:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/ci-4547-0-0-n-e5b472a427?timeout=10s\": context deadline exceeded"